Take me as I am, and my paper as it is?

Mar 06 2013 Published by Scicurious under Academia

I rarely opine on publishing or publishing practices. This is mostly because I feel like I'm not 'good' at them. My papers don't often get in with minor revisions. Often I've got a ridiculously puffed head about my own work (apparently), and send them to places which reject them out of hand, or suggest major revisions and piles of new experiments which we just cannot do for various reasons. Then the paper ends up shuttled around. Send it in, wait 3 months, get rejected. Reformat (+2 mo or even more depending on collaborators and how much other crap you've got on your plate at the time) and send it out again. Years go by. In the meantime, suggested reviewers begin to hate me and I run out of new ones (only so many people in the field!).

I really wish there was a way to get out of this. This sort of thing contributes to the long lag times and slowness of scientific advance. Sure, it'd be great if everyone just wised up to the point of knowing EXACTLY which journal their work is perfect for and if reviewers were always kind enough not to suggest that the true mechanism needs to be found with another 5 years worth of work. But clearly, we're humans and this isn't going to happen. I know loads of people who are full PIs with many years of experience who can't make this choice "wisely". This is especially true if you're stepping slightly outside of your "home" field.

But then I had a thought. What if manuscript submission could be as good as a one-shot?

Like this: you submit a paper to a large umbrella of journals of several "tiers". It goes out for review. The reviewers make their criticisms. Then they say "this paper is fine, but it's not impactful enough for journal X unless major experiments A, B, and C are done. However, it could fit into journal Y with only experiment A, or into journal Z with only minor revisions". Or they have the option to reject it outright for all the journals in question. Where there is discrepancy (as usual) the editor makes the call.

This would help several things:
1. It would save the serial rounds of resubmission and rejection and reviewers having to see the same manuscript (I've seen this from both ends). As a reviewer, you would only have to see it once (and again on major revisions, of course).

2. It would give the authors the option: improve the manuscript for a "higher" journal, or publish basically as is and get it off the desk. Yes, yes, impact factors shouldn't matter. But they still do, and until they don't, this is a choice we have to make. Wouldn't it be nice to make it in one shot? Without the 6-month resubmission period each time?

3. It would dramatically decrease the amount of time involved. With this kind of organization, you'd have the option to publish quickly, or to take the time to really add experiments and change the manuscript. Some people will ALWAYS take this option. Some people need the pubs for their tenure package and need it nownownow.

But what would it involve?

Journals working together. Probably with editors overseeing several journals. For many academic journals, this is not so far from reality, as high-end PIs take on the editorship of one journal and then head over to another. And many are overseen by the same publishing houses.

And unfortunately, I don't think it will ever be a reality. Journals have no desire to organize and work together this way. And many scientists will no doubt find reasons as to why this is a terrible idea (publishing science with an incomplete mechanism!!!). But the reality is, this happens all the time. People start higher and drop lower, and the years go by. Wouldn't it be nice to see it get a little better? It's probably just wishful thinking, but I kind of wish for it nonetheless.

47 responses so far

  • Janne says:

    Neat idea. A review pool, basically, shared by a large group of journals. Would have to be large and diverse enough to span most papers' possible destinations.

    Another idea, actually implemented in a couple of places now, is that you can ask for a straight up-or-down review of your paper. The reviewers can either accept it (with minor needed revisions) or reject it. No "major revision". No new experiments. No "revise and resubmit". It's good or it's not.

    The benefit is that reviews can be a bit shorter and more to the point, as reviewers can focus simply on whether the paper is fit for the journal or not. That translates to less work for the reviewer and faster turnaround time for the paper. You may get a definite answer within a month or so, and are then free to shop it around to another journal if it was rejected.

  • Yoder says:

    I've had reviewers at the "high" end of a paper's submission ladder suggest alternative venues. And in one case, they were bang on—the MS didn't get in at the glamormag I sent it to first, but it landed at the society journal one of the reviewers said would be a good fit.

    Not that I've ever thought this through before, but it totally makes sense that if, as a reviewer, you're suggesting rejection because a paper isn't flashy enough, it's only decent to say what journal/tier would be more appropriate. It's not as official as what you're suggesting, but if you end up suggesting the same reviewers, it'd come close.

  • Jussi Hakala says:

    My colleague suggested exactly the same a few weeks ago. It's a brilliant idea, and I think it will happen sooner or later. The amount of work done by reviewers, editors and authors would be reduced significantly (name one professor who is not overworked). It's not going to happen overnight, but I think eventually this will be the way scientific publishing works. At least it should.

  • Frontiers is a little bit like that, except that you're not immediately assigned your tier; your paper gets promoted after a while if it fits certain criteria. The system is still in its infancy, but I think the idea is not bad, and if the implementation gets tweaked and optimized in iterated cycles, this could really take off. Maybe that's why Nature Publishing Group invested in them just recently.

    But in general, of course, this constant re-submitting has to end - and believe me, it's not just you, it's everybody whose papers get shuttled around. This is an antiquated way of running things and it has to stop.

    • Scicurious says:

      But doesn't it only get promoted AFTER it's published? I feel like this would be better if it was before publishing.

      • qaz says:

        That's the beauty of it. There's no fight of "if you do this, it will have higher impact" or "if you do this, we'll let you into the magic glamour realm". ALL papers are published at the same level. If the paper is important (which is much easier to determine post-publication), then the authors get to write a second review/discussion paper about the topic.

        The general audience rarely wants all that detail anyway. They want the story. Put the snazzy story in Science. Put the complete details in JNeurosci. This post-publication review effectively does that. First paper (details) in Frontiers in the Neuroscience of Bunny Hopping. Second paper (story) in Frontiers in Neuroscience.

        • First paper (details) in Frontiers in the Neuroscience of Bunny Hopping. Second paper (story) in Frontiers in Neuroscience.

          WOO! HOO! You might get elevated from a sub-dump journal to a regular dump journal!!!

          • qaz says:

            I'm not going to argue the quality of Frontiers, here. (My personal experience with Frontiers' impact seems to be different from most.)

            This is the only neuro/bio replacement system that I've seen actually WORK. (Physics seems to work well with arXiv, but bio won't accept that because the preprints aren't peer reviewed.)

            As has been discussed, the whole "pass the reviews on down" is garbage because (1) you've wasted years fighting the high-impact journal, (2) you want to go horizontal rather than down, and (3) you probably want new reviewers anyway.

            Look, any journal that is going to take most of the papers is going to get called a "dump journal". The question is how do we make the "upward tiers" be considered better than a dump journal?

  • MindLess says:

    But some of this is actually reality! If you submit to PLoS, they recommend other journals (like PLoS ONE or others in their own range) when rejecting a paper. Or the BMC journals, where you can also get recommendations if they think another BMC journal is more fitting.

    And as a reviewer, you can always give recommendations as well. Which I've done twice already, when I knew a better-fitting journal.

  • Bob O'H says:

    Something very like this is being tried at Peerage of Science, in ecology & evolution. It seems to be developing quite well; several journals have taken part.

    As an editor I can see the attraction, but I would still want the option of asking for my own reviews.

    BTW, the endless requests for new experiments you squidgy biologists get sound silly from over here at the whole-organism end of biology. It suggests a real problem - you can't all be incompetent, not knowing what you need to do to make your case.

    • Scicurious says:

      LOL, maybe we all ARE incompetent and no one knows it. :)

      Really, I think the pressure to publish in the highest-IF journal possible for a given piece of work is what does it; we are constantly pushed to "swing for the fences". The reality is that most work can't make it, and that's ok! I just wish we could skip the first five rounds.

    • Janne says:

      Bob, thanks for mentioning Peerage of Science.

      One note though:

      In Peerage of Science editors *can* freely ask their own trusted reviewers to do peer reviews, in addition to all those engaging on their own initiative.

  • "Welcome to the Neuroscience Peer Review Consortium – an innovation in science publishing.

    The Neuroscience Peer Review Consortium is an alliance of neuroscience journals that have agreed to accept manuscript reviews from other members of the Consortium. Its goals are to support efficient and thorough peer review of original research in neuroscience, speed the publication of research reports, and reduce the burden on peer reviewers."

    Read more at the link above.

    Here's a list of the participating journals.

  • qaz says:

    This is kind of similar to the way that the Frontiers journals do it. The key is to move UP, not down. You submit an article to a specialist journal, which accepts most of the submissions. Your paper is published and citeable. If your paper gets lots of traction, you get invited to write a summary/review/discussion paper (which also gets reviewed) for a more general field-journal.

    The problem with your suggestion of being given the option to publish in journal A immediately or try for B is that it is the same dilemma we have today. I could send my paper to society-journal-A where it has a 90% chance of getting in, or send it to glamour-mag-B where it has a 10% chance of getting in. All you've done is change that 90% to 100%; we're still faced with the same dilemma - put it out now and take the lower impact factor (positive = published, negative = lower impact) or take the risk on the fancier journal.

    • Scicurious says:

      Well, but you still get the choice to do the extra experiments and improve the manuscript for a higher journal; you just also get the choice to publish as is. And you don't have to keep submitting over and over and over. At least it saves some time and reviewer headaches there!

  • Comradde PhysioProffe says:

    Cell Press already does this.

  • DrugMonkey says:

    The existing efforts are total failures. Guess why? There's a critical flaw in your scheme, Sci.

    Also...any PI has the power to minimize this horrible situation you describe. Tis in their power not to put up with the GlamourDouche approach to publishing if they want to.

    • qaz says:

      I'm curious, DM, what do you see as the critical flaw in Scicurious' plan? (What do you see as the flaws in the other ones?)

      And, yes, any PI has the power to risk their career by rejecting the GlamourMag game. The problem is that we have to spend a huge amount of our effort playing these games instead of doing science. Do I risk getting scooped for another 10 points of impact factor? If I go to the low-impact journal, my paper is citeable, but no one will see it. If I put this paper in the low-impact journal, it will be available for my grant renewal, but it may cost me later when I go up for tenure or when my department puts me up for a fancy monetary award. Do I send my grant to NIH now or wait until I have some more preliminary data?

      My point is that in every case the right answer to each of these questions depends on your situation. And success in science depends as much on getting these answers right as it does on doing good science. [See, for example, grantsmanship blogs, like that written by DrugMonkey...]

      • drugmonkey says:

        what do you see as the critical flaw in Scicurious' plan?

        Nobody bucking for IF immediately goes (significantly) down. They go (approximately) lateral and hope to get lucky. The NPRC is a classic example. At several critical levels there is no lateral option. And even if there was, the approximately equal IF journals are in side-eyeing competition...me, I sure as hell don't want the editors of Biol Psychiatry, J. Neuro and Neuropsychopharmacology knowing that I've been rejected by one or two of the other ones first.

        I also contest the degree to which a significantly "lower" journal thinks that it is, indeed, lower and a justifiable recipient of the leavings. Psychopharmacology, for example, is a rightful next stop after Neuropsychopharmacology but somehow I don't think ol' Klaus is going to take your manuscript any easier just because the NPP decision was "okay, but just not cool enough". Think NPP and Biol Psych are going to roll over for your Nature Neuroscience reject? hell no. Not until their reviewers say "go".

        the right answer to each of these questions depends on your situation.

        Right. And personally I advocate balance. So if Sci is *constantly* getting into these situations, she is not achieving the right balance. (Yes, I grasp that a postdoc or grad student's ability to affect these is limited...but you chose the PI/lab and you can advocate/not for surety over IF too, ya know)

        • scicurious says:

          Hmmm, yes. A good point. No one wants to admit they are a next stop journal. They just say they are...more specific in the field, say. Makes it more rarefied.

          And I'm not constantly getting in the situation, but it's happened enough times that I'm very tired of the cycle.

          • drugmonkey says:

            We each have to find our own toleration level....

            My most general advice for this is to look about you at those who are succeeding at the present time. In your approximate subfield. Not just looking at the elite, either. Look at the entire subfield. Who is surviving with what kind of approach?

            That sets your target level.

            If your level of IF chasing puts you in the top 10%, top 25%...... it probably isn't the *only* way to get it done. If it puts you in the bottom 33%...well, that might just be life.

    • scicurious says:

      As I said, DM, I'm not very good at this publishing game, as I keep dealing with rounds of rejection even when I think I know the journals.

      So what is the critical flaw? I'm definitely not smart enough to see it.

      • antistokes (allison l. stelling) says:

        Ah, if you're not good at the publishing game, why are you playing?

        This is a pretty good example of why the USA needs more positions for scientists who design the experiments and help the lab techs do the lab work, but don't have to worry about editorial comments and the occasional abrasive reviewer.

        I mean, my article for PLoS ONE started out as a submission to JACS. The editor said "Interesting but not general audience", so I re-wrote it in about a day or so and submitted it to Biochemistry, who said it was "Interesting but outside the scope of the journal." I agreed with the editor (there's animal and human data in it, so it needed an Ethics Check), and a few days later re-formatted and re-wrote it and submitted it to PLoS ONE.

        PLoS rejected it, but the editor said I should re-write and re-submit. I agreed with the rejection; the paper was a mess and I had been under pressure to submit before I had a proper analysis section for the patient tumor data. My co-authors were a bit dismayed by some of the harsher comments, but honestly they made me laugh (one said it was "hopeless", which I was pretty sure any decent analytical chemistry professor would say about the original submission, but I had to get my co-authors to see that). I spent a few months coming up with an analysis that would pass muster and double-checking it with a stats prof in a different country. Then I asked several USA profs (not my PhD adviser; I was doing a German postdoc) to give it a look-see in exchange for mention in the Acknowledgements. I wrote out my own somewhat snarky Response to Reviewers, which included a few pointed comments to the chemistry prof about the difficulties involved in biomedical research. Then I resubmitted, and it was accepted first try, no revisions. I needed to make a few changes for production, but all easy typesetting stuff. Mind you, this is for my *postdoc* work.

        Heck, I've been responding to referee comments since grad school; the PI saw I had an interest and spent a reasonable amount of his valuable time mentoring me through the ACS journal system. I agree with you about how....fractured your current publishing system seems to be though; sounds like it could use some consolidation.

      • drugmonkey says:

        The flaw is in thinking that the NPRC structure is a solution. I'd like to see some numbers on how many people have availed themselves of this approach.

  • namnezia says:

    Through the Neuroscience journal consortium mentioned by Neurocritic, I have actually seen papers re-sent to me for review by a different journal after being rejected by another one. All I had to answer was: yes, it is suitable for this journal, and it was accepted. Truth was, I thought it was acceptable for the "higher" journal, but I guess the other reviewers disagreed.

    • Has anyone else other than namnezia been involved with NPRC, as either an author or a reviewer? Sounds like the paper in question here had a pretty good outcome (other than the initial rejection from the other reviewers).

      • I thought about using this consortium, and even emailed the reviewing editor asking if ze thought the reviews would land it in a 'next stop' journal. Ze said no, the reviews raised too many serious concerns. So we sent it to that next stop journal separately, and got new reviews, and they only asked for revisions.
        I am curious how often the reviews say 'nice paper, but not for this journal' (actually just happened to me) compared to sounding like 'this research has flaws that make it next to worthless' (happened to me also...for the same paper).
        I agree that a big problem would be the journals and reviewers all agreeing on the proper lineup of journals. No journal is going to want to sit tenth in line. I also think that not having the option to spin the roulette wheel for new reviewers could be awful if you get one of those particularly nasty reviewers. It could sink your paper much lower than necessary.

    • drugmonkey says:

      but if you are being asked to review it, doesn't this defeat the purpose?

  • TheThirdReviewer says:

    Yep. Let's keep judging papers based on what we think the impact will be, instead of letting the science speak for itself. We've all seen good papers in poor journals and terrible papers in GlamourMags.

    Why not go all the way to allowing the extra pieces of info to be published after the fact as a supplement to the first paper? That way we can end up with epic tomes of juicy science goodness.

  • eeke says:

    Sol Snyder just wrote an opinion piece in PNAS about this very problem. He said that in the old days, you could submit a bit of data and get it published - it didn't have to be an entire goddamn story with years and years of grueling, costly, and career-ending work behind it. Here is the link:

    http://www.pnas.org/content/110/7/2428.full?sid=6d3f52c4-a841-4fca-a694-311f80a8b74f

    • BugDoc says:

      This is a thoughtful opinion piece that echoes one written by Hidde Ploegh several years ago. Snyder suggests that journals "should provide expeditious reviewing and reasonable requests for revision. Their cadre of reviewers should be trained in this modified approach."

      Here's the problem with that suggestion. Journals do not train their reviewers. In fact, training of young scientists in the review process is completely informal and left up to each mentor. Moreover, reviewers generally do not get feedback, other than looking to see how well their reviews mesh with the other anonymous reviewers, who may or may not be reasonable. Nor is there in most cases any quality control for reviews. I find it uncommon (although it does thankfully still happen) that editors overrule unreasonable requests from reviewers. It seems more common for authors to get told to just address all the reviewers' comments and then it's up to the authors to argue with the editor about it.

  • Interesting idea. But isn't it possible that in this system, publication-hungry researchers (read "most") will take the easy way out and publish in the lower-tier journals without performing additional experiments? Now some of this might be justified, but it might also lead to an even greater abundance of incomplete and mediocre papers.

    • scicurious says:

      I figure people will do the extra experiments as long as IF is king.

    • qaz says:

      As I noted in my reply to DM above, sometimes the right answer is to publish NOW (because, for example, your grant review said "if you can publish this, we'll give you your funding"), and sometimes the right answer is to fight for the higher impact journal (because, for example, you've been told by your tenure committee that you don't have enough high-impact papers).

      It depends on your situation. Researchers are hungry for many high impact publications. Sometimes one has to push the "many" part and sometimes one has to push the "high-impact" part of that equation.

      I wish it didn't have to be that way.

      Also, in my experience high-impact is (if anything) anticorrelated with completeness. And I'm not convinced that journal impact factor is correlated (or anticorrelated) with mediocrity. Lots of mediocre papers in high-impact journals; lots of great papers in low-impact journals. Journal impact factor is correlated with number of citations, and that's about it. Unfortunately, number of citations is correlated with the number of people who've read your work.

      • drugmonkey says:

        in my experience high-impact is (if anything) anticorrelated with completeness.

        the only thing correlated with completeness is sustained effort at a topic by multiple laboratories over a long period of time. The end.

    • drugmonkey says:

      publication-hungry researchers (read "most") will take the easy way out and publish in the lower-tier journals without performing additional experiments?

      so what? there are *always* some additional experiments to be done. And if interesting enough, someone else will do them. Or the original authors will. and meantime, the frigging information is published and available for the next person to build upon. and maybe there will be partial replication along the way. and science will advance at a more rapid clip.

  • I love this idea!

    In interdisciplinary earth/enviro sci, we run into problems when biologists review hydrology papers, or geologists review biology papers. Each discipline has a different approach - often a biologist will tell a hydrologist that they don't have enough 'replicates', which is usually untenable with limited funds and watersheds usually >1 km² in size.

    With an umbrella reviewing scheme for a range of journals, we could avoid a lot of reject/resubmit/etc.

  • bsci says:

    This proposal has some similarities to:
    Kravitz, D.J., Baker, C.I., 2011. Toward a new model of scientific publishing: Discussion and a proposal. Frontiers in Computational Neuroscience 5, 1–12.

    They don't quite stake out the same goal of publishing in existing journals, but they say that once the authors have decided they've sufficiently responded to reviewers, the paper is published somewhere.

    That article is part of a series on other visions of publication: http://www.frontiersin.org/computational_neuroscience/researchtopics/Beyond_open_access_visions_for/137

  • Aubrey Tauer says:

    I too love this idea. In my field, it isn't always possible to do additional experiments, like when your field site gets involved in a civil war or your animal population dies off. I have a paper about to be submitted that I'm sure they will say needs more data, but the country in question shut down the biological station and it is just not possible.

    • I think requests for more data are quite frequently (but not always) bogus. Review the paper for what it is, and what conclusions CAN be drawn from the data that exist. Some experiments, for example those involving terminal procedures in mammals, may just be too damn expensive to replicate on a reasonable timetable.

      • qaz says:

        It's generally unspoken, but usually, what reviewers really mean when they ask for new data is that they are asking the authors to either (1) remove/reduce the extravagant claims or (2) provide additional support for those extravagant claims. Since it's the extravagant claims that got them into the GlamourMag in the first place, most authors are reluctant to remove them.

        Perhaps reviewers should be made to state what extravagant claims can be removed instead of doing the additional analyses. (I know I generally try to.)

        In my field, additional data is usually additional analyses, which are often not that difficult, and generally improve the paper significantly. But YMMV.

  • cc says:

    You academics need agent/editors.

  • anon says:

    Here's a thought: just say no to additional, reviewer-suggested experiments. Politely, with a good argument for why the story is complete 'enough' (not lacking controls, sufficiently supports the conclusions). If you need to, dial back the impact claims and edit the text to make your case clearer. There has clearly been inflation in how much data & what range of combined work qualifies as a high-IF paper, or even a mid-range society journal paper, as Snyder's article points out. If that means rejection, then take it & move on. If everyone did this, problem solved. There's no law that says you have to accede to every reviewer request.

  • yot says:

    You want to look at this: http://www.peerageofscience.org/ It's a different model: it's a peer-review scheme that is NOT tied to any journal. Once the peer-review is done, the actual journals come and pick the ones they want. I think it's very cool.

  • Anon says:

    I think the F1000 Research model is good. Publish first and then let the peer review happen openly post-publication. If the review suggests revisions, you can create a new version addressing those points. It goes a long way in cutting out all the nonsensical waiting and the tiresome jostling to publish in a high-IF journal.
