So you finally got your paper accepted, the proofs have come and been returned in 48 hrs (lest some evil, unspecified thing happen). You waited a little bit and BOOM, up it pops on PubMed and on the pre-publication list at the journal. The article is, for
all intents and purposes, published.
Now get back to work on that next paper.
But there's that nagging little thought.....it isn't really published until it gets put in a print issue. Most importantly, you don't know for sure which year it will be properly published in, so the citation is still in flux. So you look at the number of items below yours in the pre-print list, figure out approximately how many articles are published per issue in the journal and game it out. Ugh.... four months? Six? EIGHT????
WHY O WHY gods of publishing?? WHY must it take so long???????
Whenever I've heard a publishing flack address this, it has been some mumblage about making sure they have a smooth publication track. That they are never at a loss to publish enough in a given issue. And they have to stick to the schedule, don't you know!
(except they don't. Volumes are pretty fixed but you'll notice a "catch up" extra issue of a volume now and again.)
Well, well, well. Something I've never considered was raised in a blog post at Scholarly Kitchen. An article by Frank Krell in Learned Publishing (I swear I'm not making that journal title up) asks if publishers might be using this to game the Impact Factor of their journals.
Dammit! Totally true. Think about it...
Now, before I get started, the Scholarly Kitchen, good publisher flacks that they are, caution:
To me, there needs to be some evidence — even anecdotal — that editors are purposefully post-dating publication for the purposes of citation gaming. Large January issues may be one piece of evidence; however, it may also signal the funding and publication cycle of academics. I’d be more interested to know whether post-dating conversations are going on within editorial boards, or whether authors have been told that the editor is holding back their article to maximize its contribution to the journal’s impact factor.
But this only really addresses the specific point that Krell made about pushing issues around with respect to the start of a new year.
There's a larger point at hand. One of the objections I've always had to the IF calculation is that the two-year window (the IF for a given year counts only citations to articles published in the previous two calendar years) puts a serious constraint on the types of citations that are available in certain kinds of science. The kind where it just takes a lot of time to come up with a publishable data set.
Take normal old, run of the mill behavioral experiments that can be classified as behavioral pharmacology (within which a lot of substance abuse studies live). Three to four months, easily, just to get an animal experiment done. Ordering, habituating, pre-training, surgeries and recovery...it takes time. A typical study might be 3-4 groups of subjects, aka, experiments. That's if you get lucky. Throw in some false avenues and failed experiments and you are easily up to 6 or 8 groups. Keep in mind that physical resources like operant boxes, constraints such as the observation window (could be a 6 hr behavioral experiment, no problemo) and available staff (not everyone has a tech) really narrow down the throughput. You can't just "work faster" or "work harder" the way it supposedly is possible at the bench. The number of "experiments" you can do doesn't scale up with time spent in the lab if you are doing behavioral studies with some sort of timecourse. The PI may not even be able to do much by throwing more people at the project, even if the laboratory does have that sort of flexibility.
So here you are, Joe Scholar, reading your favorite journal when BLAM! You see an awesome paper that gives you a whole line of new ideas that you could and should set out to study. Like, RIGHT FREAKING NOW!!! Okay, so suppose money is not an issue and you don't have anything else particularly pressing. Order some animals and off you go.
It's going to be a YEAR minimum to complete the studies. A month to write up the draft, throw in three months for peer review and another month for the journal to get its act together. Thus, if things go really, really well for you, there is only about a seven-month window of slack left to get a citation in for that original motivating paper before the two-year IF citation window elapses.
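The back-of-the-envelope timeline can be sketched as a quick calculation. (The month figures below are the rough estimates from the text, not measured values.)

```python
# Rough timeline, in months, for a follow-up study that would cite a new paper.
# All figures are back-of-the-envelope estimates, not measured data.
IF_WINDOW = 24          # two-year Impact Factor citation window

experiments = 12        # a year minimum to complete the studies
writing = 1             # drafting the manuscript
peer_review = 3         # peer review at the journal
production = 1          # journal turnaround after acceptance

turnaround = experiments + writing + peer_review + production
slack = IF_WINDOW - turnaround

print(f"Total turnaround: {turnaround} months")               # 17 months
print(f"Slack before the IF window closes: {slack} months")   # 7 months
```

And that is the best case; every failed experiment or slow review eats directly into that slack.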
Things never go that well.
In my view this makes it almost categorically impossible for a publication to garner IF credit for the citation that is the most meaningful of all: a citation from a paper motivated almost entirely by said prior work.
The principle extends though. Even if you only see the paper and realize you need to incorporate it into your Discussion or Introduction, the length of time the paper is available with respect to the IF window matters. If there were just some way journals could extend that window between general availability of a work and the expiration of the IF window then this would, statistically, boost the number of citations. If the clock doesn't start running until the paper has been visible for 6 months....say, how could we do that? How....? Hmm.
Ah yes. Let it languish in the "online first" archive! Brilliant! It goes up on PubMed and people can read the paper. Get their experiments rolling. Write the findings into their Intros and Discussions.
I agree with the Scholarly Kitchen post that we don't know that this is why some journals keep such a hefty backlog of articles in their pre-print queue. Having watched a handful of my favorite journals maintain anywhere from six to thirteen month offsets over periods of many months to years, however, I have my suspicions. The journals I pay attention to have maintained their offsets for at least a decade, if you assume the lower bound of about 4-5 months (and trust that my spot-checking is valid as a general rule). The idea that they do this to avoid publication dry spells is nonsense: they have plenty of accepted articles coming in frequently enough that they could trim down to, say, 2-3 months of backlog. So there must be another reason.