We most recently took up the issue of the Least Publishable Unit of science in the wake of a discussion about first authorships (although I've been talking about it on the blog for some time). In that context, the benefit of having more, rather than fewer, papers emerging from a given laboratory group is that individual trainees have more chance of getting a first-author slot. Or they get more of them. This is highly important in a world where the first-author publications on the CV loom so large. Huge in fact.
I've also alluded to the fact that LPU tendencies are a benefit to the conduct of science (as a group enterprise) because they allow the faster communication of results, the inclusion of more methodological detail (critical for replication and extension) and potentially the inclusion of more negative outcomes (which saves the group time).
I have also staked my claim that in an era when most of us find, sort and organize literature with search engine tools from our desktop computers, the "costs" of the LPU approach are minimal.
The recent APS Observer reprinted a column in the NYT that I'd originally missed entitled "The Perils of 'Bite sized' Science" (Marco Bertamini and Marcus R. Munafò; published January 28, 2012). Woot! No offense, commentariat, but you've done a dismal job so far of making an argument for why the LPU approach is so bad or detrimental to the conduct of science, particularly in response to my reasons. So I was really stoked to see this, in hopes of gaining some insight. I was sadly disappointed.
The authors start off by putting themselves in a hole:
In a 2010 article, the psychologist Nick Haslam demonstrated empirically that, when adjusted for length, short articles are cited more frequently than other articles — that is, page for page, they get more bang for the buck. Professor Haslam concluded that short articles seem “more efficient in generating scientific influence” and suggested that journals might consider adopting short-article formats.
I quote that because their first argument relates directly:
Suppose that the scientists who will cite your studies will cite them in either format, either the long article or the pair of shorter articles. Based on citations, each of the three articles would have the same impact, but on a per-page measure, the shorter articles would be more “influential.” But this would reflect only how we measure impact, not a difference in actual substance or influence.
Yeah, so what? The "actual substance or influence" is the same for the pair versus the single paper. These are made up of the same data! So the only possible difference is... that's right, the entirely circular argument that a bigger paper is "better". We can dismiss this as tautological.
On to the second main point:
we challenge the idea that shorter articles are easier and quicker to read. This is true enough if you consider a single article, but assuming that there is a fixed number of studies carried out, shorter articles simply mean more articles. And an increase in articles can create more work for editors, reviewers and, perhaps most important, anyone looking to fully research or understand a topic.
PhysioProf brought this up on the prior post, and it is arguable when it comes to the editors, the reviewers and (PP's point) the submission burden. But the last point I "challenge". It's utter nonsense in the PubMed/Scopus/Google Scholar/Mendeley/etc era.
we worry that shorter, single-study articles can be poor models of science. Replication is a cornerstone of the scientific method, and in longer papers that present multiple experiments confirming the same result, replication is manifestly on display; this is not always so with short articles. (Indeed the shorter format may discourage replication, since once a study is published its finding loses novelty.) Short articles are also more likely to suffer from “citation amnesia”: because an author has less space to discuss previous relevant work, he often doesn’t do so, which can give the impression that his own finding is more novel than it actually is.
The first point is possibly unique to the psychological sciences, and it is true that the "long form" in this subfield (see the Journal of Experimental Psychology titles for the model) tends toward multiple self-referential replications with minute variations. This has some benefits, but it can get boring, btw. It is possible that lasting scars from reading all those "full story, substantial papers" at an early training stop sowed the seeds of my current attitude, I will confess.

Novelty? Meh. I think we need to move away from this anyway. And look, encouraging more space for LPU articles is not saying we should do away with or prohibit long, long papers. Not at all. You want to do that? Go ahead. (Just don't force your poor trainees to pay the price, is all.)

Citation amnesia? Yeaaaah, well let's just say longer form articles are no protection against that!

Seriously though, in my view the idea of a LPU is distinguished from the character-count limitations of articles that are "Brief Communications" or whatnot. To me the LPU tends to address the science, the number of figures and experiments if you will, and not so much the length of the Intro or Discussion. What's another paragraph of Discussion? Or even an added page? Again, we are moving to an electronic publishing/distribution format in which the cost of adding pages is starting to look like a very bad argument.
The authors end up with a more substantive, scientific concern:
Finally, as we discuss in detail in this month’s issue of the journal Perspectives on Psychological Science, we are troubled by the link between small study size and publication bias.
I dunno. I think they are goal-post moving here. The idea of "Least Publishable" means that it is, well, Publishable: that the data you are accepting for publication are good in and of themselves. The authors seem to be decrying a situation in which there are not really enough subjects, of suspecting that "Least Publishable" really means a shift to poster-science, where any preliminary result that manages to get you a p<0.05 is good to go. I don't see it like this at all. For me the emphasis is that what we are discussing is a good finding, up to the standards of your field. It would be a perfectly acceptable figure or three to include in what people seem to mean by a "complete story". It's just that there are one to three figures instead of seven (plus eleven supplemental figures).