Archive for the 'Science Communication' category
I just had a brilliant idea. Which means that probably someone else has had it before.
Have you ever heard of someone going to an open-mic night at the coffeeshop and laying down a science presentation?
I am disturbingly captivated by the idea of whipping out a laptop and projector and talking about some of our recent science at my local java joint...
One duffymeg at the Dynamic Ecology blog has written a post wondering:
How do you decide which manuscripts to work on first? Has that changed over time? How much data do you have sitting around waiting to be published? Do you think that amount is likely to decrease at any point? How big a problem do you think the file drawer effect is?
This was set within the background of having conducted too many studies and not finding enough time to write them all up. I certainly concur that by the time one has been rolling as a laboratory for many years, the unpublished data does have a tendency to stack up, despite our best intentions. This is not ideal but it is reality. I get it. My prior comments about not letting data go unpublished were addressing that situation where someone (usually a trainee) wanted to write up and submit the work but someone else (usually the PI) was blocking it.
To the extent that I can analyze my de facto priority, I guess the first priority is my interest of the moment. If I have a few thoughts or new references to integrate with a project that is in my head...sure I might open up the file and work on it for a few hours. (Sometimes I have been pleasantly surprised to find a manuscript is a lot closer to submitting than I had remembered.) This is far from ideal and can hardly be described as a priority. It is my reality though. And I cling to it because dangit...shouldn't this be the primary motivation?
Second, I prioritize things by the grant cycle. This is a constant. If there is a chance of submitting a manuscript now, and it will have some influence on the grant game, this is a motivator for me. It may be because I am trying to get it accepted before the next grant deadline. Maybe I'm trying to beat the 30-day window before grant review, during which updating reviewers with news of an accepted manuscript is permitted. Perhaps because I am anticipating the Progress Report section for a competing continuation. Perhaps I just need to lay down published evidence that we can do Technique Y.
Third, I prioritize the trainees. For various reasons I take a firm interest in making sure that trainees in the laboratory get on publications as an author. Middle authorship is fine, but I want to chart a clear course to at least that minimum. The next step is prioritizing first-author papers...this is most important for the postdocs, of course, and not strictly necessary for the rotation students. It's a continuum. In times past I may have had more affection for the notion of trainees coming in and working on their "own project" from more or less scratch until they got to the point of a substantial first-author effort. That's fine and all, but I've come to the conclusion I need to do better than this. Luckily, this dovetails with the point raised by duffymeg, i.e., that we tend to have data stacking up that we haven't written up yet. If I have something like this, I'll encourage trainees to pick it up and massage it into a paper.
Finally, I will cop to being motivated by short term rewards. The closer a manuscript gets to the submittable stage, the more I am engaged. As I've mentioned before, this tendency is a potential explanation for a particular trainee complaint. A comment from Arne illustrates the point.
on one side I more and more hear fellow Postdocs complaining of having difficulties writing papers (and tellingly the number of writing skill courses etc offered to Postdocs is steadily increasing at any University I look at) and on the other hand, I hear PIs complaining about the slowness or incapability of their students or Postdocs in writing papers. But then, often PIs don’t let their students and Postdocs write papers because they think they should be in the lab making data (data that might not get published as your post and the comments show) and because they are so slow in writing.
It drives me mad when trainees are supposed to be working on a manuscript and nothing occurs for weeks and weeks. Sure, I do this too. (And perhaps my trainees are bitching about how I'm never furthering manuscripts I said I'd take a look at.) But from my perspective grad students and postdocs are on a much shorter time clock and they are the ones who most need to move their CV along. Each manuscript (especially first author) should loom large for them. So yes, perceptions of lack of progress on writing (whether due to incompetence*, laziness or whatever) are a complaint of PIs. And as I've said before, it interacts with the PI's motivation to work on your draft. I don't mind if it looks like a lot of work needs to be done, but I HATE it when nothing seems to change following our interactions and my editorial advice. I expect the trainees to progress in their writing. I expect them to learn both from my advice and from the evidence of their own experiences with peer review. I expect the manuscript to gradually edge towards greater completion.
One of the insights that I gained from my own first few papers is that I was really hesitant to give the lab head anything short of what I considered to be a very complete manuscript. I did so and I think it went over well on that front. But it definitely slowed my process down. Now that I have no concerns about my ability to string together a coherent manuscript in the end, I am a firm advocate of throwing half-baked Introduction and Discussion sections around in the group. I beg my trainees to do this and to work incrementally forward from notes, drafts, half-baked sentences and paragraphs. I have only limited success getting them to do it, I suspect because of the same problem that I had. I didn't want to look stupid and this kept me from bouncing drafts off my PI as a trainee.
Now that I think the goal is just to get the damn data in press, I am less concerned about the blah-de-blah in the Intro and Discussion sections.
But as I often remind myself, when it is their first few papers, the trainees want their words in press. The way they wrote them.
*this stuff is not Shakespeare, I reject this out of hand
Utter failure to gain clarity.
It isn't as though I insist that each and every published paper everywhere and anywhere is going to be of substantial value. Sure, there may be a few studies, now and then, that really don't ever contribute to furthering understanding. For anyone, ever. The odds favor this and do not favor absolutes. Nevertheless, it is quite obvious that the "clutter", "signal to noise", "complete story" and "LPU=bad" dingdongs feel that it is a substantial amount of the literature that we are talking about. Right? Because if you are bothering to mention something under 1% of what you happen across in this context then you are a very special princess-flower indeed.
Second, I wonder about the day-to-day experiences of people that bring them to this. What are they doing and how are they reacting? When I am engaging with the literature on a given topic of interest, I do a lot of filtering even with the assistance of PubMed. I think, possibly I am wrong here, that this is an essential, ESSENTIAL part of my job as a scientist. You read the studies and you see how they fit together in your own understanding of the natural world (or unnatural one, if that's your gig). Some studies will be tour-de-force bravura evidence for major parts of your thinking. Some will provide one figure's worth of help. Some will merely sow confusion...but proper confusion, the kind that helps you avoid assuming something is more likely to be so than it is. In finding these, you are probably discarding many papers on reading the title, on reading the Abstract, on the first quick scan of the figures.
So what? That's the job. That's the thing you are supposed to be doing. It is not the fault of those stupid authors who dared to publish something of interest to themselves that your precious time had to be wasted determining it was of no interest to you. Nor is it any sign of a problem of the overall enterprise.
Thoughts on the Least Publishable Unit
Yet, publishing LPU's clearly hasn't harmed some prominent people. You wouldn't be able to get a job today if you had a CV full of LPU's and shingled papers, and you most likely wouldn't get promoted either. But perhaps there is some point at which the sheer number of papers starts to impress people. I don't completely understand this phenomenon.
We had some incidental findings that we didn't think worthy of a separate publication. A few years later, another group replicated and published our (unpublished) "incidental" results. Their paper has been cited 12 times in the year and a half since publication in a field-specific journal with an impact factor of 6. It is incredibly difficult to predict in advance what other scientists will find useful. Since data is so expensive in time and money to generate, I would much, much rather there be too many publications than too few (especially given modern search engines and electronic databases).
For some reason the response on Twittah to the JSTOR downloader guy killing himself has been a round of open access bragging. People are all proud of themselves for posting all of their accepted manuscripts on their websites, thereby achieving personal open access.
But here is my question.... How many of you are barraged by requests for reprints? That's the way open access on the personal level has always worked. I use it myself to request things I can't get through the journal's site. The response from the communicating author is always prompt.
Seems to me that the only reason to post the manuscripts is when you are fielding an inordinate number of reprint requests and simply cannot keep up. Say...more than one per week?
So are you? Are you getting this many requests?
Neuropolarbear has a post up suggesting that people presenting posters at scientific meetings should know how to give the short version of their poster.
My favorite time to see posters is 11:55 and 4:55, since then people are forced to keep it short.
If you are writing your poster talk right now, remember to use a stopwatch and make your 5 minute version 5 minutes.
Don't even practice a longer version.
I have a suggestion.
Ask the person to tell you why they are there! Really, this is a several second exchange that can save a lot of time. For noobs, sure, maybe this is slightly embarrassing because it underlines that even if you have managed to scope out the name successfully you do not remember that this is some luminary in your subfield. Whatever. Suck it up and ask. It saves tremendous time.
If you are presenting rodent behavioral data and the person indicates that they know their way around an intravenous self-administration procedure, skip the methods! or just highlight where you've deviated critically from the expected paradigms. If they are some molecular douche who just stopped by because "THC" caught their eye then you may need to go into some detail about what sort of paradigms you are presenting.
Similarly if it is someone from the lab that just published a paper close to your findings, just jump straight to the data-chase. "This part of figure 2 totally explains what you just published"
Trust me, they will thank you.
As Neuropolarbear observes, if you've skipped something key, then this person will ask. Poster sessions are great that way.
Watch this video. If you are anything like me, you have essentially zero understanding of what this guy is talking about. To start with. It very rapidly devolves into technical jargon and insider references to things that I don't really understand.
But you know what?
After a while you probably kinda-sorta pick up on what is going on and can kinda-sorta understand what he's telling his audience. I think I am impressed at that part.
Watching this through also makes you realize that a computer-geek presentation really doesn't differ much from the talks we give in our science subfields. And if you skip through to the Q&A about two-thirds of the way through, you'll see that this part is familiar too.
I think I may just make this a training video for my scientific trainees.
Coincidentally, a couple of twitter remarks today also reinforced the idea that what we are all really after is other people who cite our work.
More people should cite my papers.
I totally agree. More people should cite my papers. Often.
was a bit discouraged when a few papers were pub'ed recently that conceivably could have cited mine
Yep. I've had that feeling on occasion and it stings. Especially early in the career when you have relatively few publications to your name, it can feel like you haven't really arrived yet until people are citing your work.
Before we get too far into this discussion, let us all pause and remember that all of the specifics of citation numbers, citation speed and citation practices are going to be very subfield dependent. Sometimes our best discussions are enhanced by dissecting these differences but let's try not to act like nobody recognizes this, even though I'm going to do so for the balance of the post....
So, why might you not be getting cited and what can you do about it? (in no particular order)
1) Time. I dealt with this in a prior post on gaming the impact factor by having a lengthy pre-publication queue. The fact of the matter is that it takes a long time for a study that is primarily motivated by your paper to reach publication. As in, several years of time. So be patient.
2) Time (b). As pointed out by Odyssey, sometimes a paper that just appeared reached final draft status 1, 2 or more years ago and the authors have been fighting the publication process ever since. Sure, occasionally they'll slip in a few new references when revising for yet the umpteenth time but this is limited.
3) Your paper doesn't hit the sweet spot. Speaking for myself, my citation practices lean this way for any given point I'm trying to make: the first, best and most recent. Rationales vary, and I would assume most of us can agree that the best, most comprehensive, most elegant and all-around most scientifically awesome study is the primary citation. Opinions might vary on primacy, but there is a profound sub-current that we must respect the first person to publish something. The most-recent is a nebulous concept because it is a moving target and might have little to do with scientific quality. But all else equal, the more recent citations should give the reader access to the front of the citation thread for the whole body of work. These three concerns are not etched in stone, but they inform my citation practices substantially.
4) Journal identity. I don't need to belabor this but suffice it to say some people cite based on the journal identity. This includes Impact Factor, citing papers on the journal to which one is submitting, citing journals thought important to the field, etc. If you didn't happen to publish there but someone else did, you might be passed over.
5) Your paper actually sucks. Look, if you continually fail to get cited when you think you should have been mentioned, maybe your paper(s) just sucks. It is worth considering this. Not to contribute to Imposter Syndrome but if the field is telling you to up your game...up your game.
6) The other authors think your paper sucks (but it doesn't). Water off a duck's back, my friends. We all have our opinions about what makes for a good paper. What is interesting and what is not. That's just the way it goes sometimes. Keep publishing.
7) Nobody knows you, your lab, etc. I know I talk about how anyone can find any paper in PubMed but we all need to remember this is a social business. Scientists cite people they know well, people they've just been chatting with at a poster session and people who have just visited for Departmental seminar. Your work is going to be cited more by people for whom you/it/your lab are most salient. Obviously, you can do something about this factor...get more visible!
8) Shenanigans (a): Sometimes the findings in your paper are, shall we say, inconvenient to the story the authors wish to tell about their data. Either they find it hard to fit it in (even though it is obvious to you) or they realize it compromises the story they wish to advance. Obviously this spans the spectrum from essentially benign to active misrepresentation. Can you really tell which it is? Worth getting angsty about? Rarely.....
9) Shenanigans (b): Sometimes people are motivated to screw you or your lab in some way. They may feel in competition with you and, nothing personal but they don't want to extend any more credit to you than they have to. It happens, it is real. If you cite someone, then the person reading your paper might cite them. If you don't, hey, maybe that person will miss it. Over time, this all contributes to reputation. Other times, you may be on the butt end of disagreements that took place years before. Maybe two people trained in a lab together 30 years ago and still hate each other. Maybe someone scooped someone back in the 80s. Maybe they perceived that a recent paper from your laboratory should have cited them and this is payback time.
10) Nobody knows you, your lab, etc II, electric boogaloo. Cite your own papers. Liberally. The natural way papers come to the attention of the right people is by pulling the threads. Read one paper and then collect all the cited works of interest. Read them and collect the works cited in that paper. Repeat. This is the essence of graduate school if you ask me. And it is a staple behavior of any decent scientist. You pull the threads. So consequently, you need to include all the thread-ends in as many of your own papers as possible. If you don't, why should anyone else? Who else is most motivated to cite your work? Who is most likely to be working on related studies? And if you can't find a place for a citation....
When you are reviewing papers for a journal, it is in your best interest to stake out papers most like your own as "acceptable for publication".
If it is a journal with a higher IF than you usually reach, you should argue for acceptance of manuscripts that are somewhat below that journal's standard.
If it is a journal in which you have published, it is in your interest to crap on any manuscript that is lesser than your typical offerings.
We most recently took up the issue of the Least Publishable Unit of science in the wake of a discussion about first authorships (although I've been talking about it on blog for some time). In that context, the benefit of having more, rather than fewer, papers emerging from a given laboratory group is that individual trainees have more chance of getting a first-author slot. Or they get more of them. This is highly important in a world where the first-author publications on the CV loom so large. Huge in fact.
I've also alluded to the fact that LPU tendencies are a benefit to the conduct of science (as a group enterprise) because it allows the faster communication of results, the inclusion of more methodological detail (critical for replication and extension) and potentially the inclusion of more negative outcomes (which saves the group time).
I have also staked my claim that in an era when most of us find, sort and organize literature with search engine tools from our desktop computers, the "costs" of the LPU approach are minimal.
The recent APS Observer reprinted a column in the NYT that I'd originally missed, entitled "The Perils of 'Bite Sized' Science" (Marco Bertamini and Marcus R. Munafò; published January 28, 2012). Woot! No offense, commentariat, but you've done a dismal job so far of making an argument for why the LPU approach is so bad or detrimental to the conduct of science, particularly in response to my reasons. So I was really stoked to see this, in hopes of gaining some insight. I was sadly disappointed.