So long, farewell, etc...
My suitcase lies open on the bed while I wait for my hair iron to cool enough to pack. If I knew more physics, I might be able to explain why that takes so long. Lucky for you, physics bored me.
I have material for at least two more posts; they may go up soon, or they may wait until next week. Tomorrow I must jet off to a meeting in DC. I hate it when my day jobs interfere with le blogging.
I had a great time blogging Experimental Biology. I planned my meeting in more detail than I have since my very first conference in fellowship. I saw a greater breadth of sessions, took better notes, and synthesized my thoughts. I learned a hell of a lot this year. Even if I am not an official meeting blogger, I may start approaching conferences in this manner. I have always learned by organizing the information in writing. In college and medical school I used a typewriter; now my thoughts go out in the blogosphere or in my Dropbox. Same process, different tools.
Of course, I really enjoyed catching up with my IRL and OTI friends. You all know who you are. And even if I never accomplished anything else, I will leave behind a Storify of the taint talk.
They say it taint over (sorry, I had to go there) 'til the fat lady sings, and she is warming up for her farewell to #EB2012.
On to measuring things: no, I am not referring to d00dly ruler tricks here. We all know that no one actually measures.
Fellow Scientopian DrugMonkey has blogged a perfect storm of a discussion on impact factor and glamour science. Click on over and read the comments (warning: your head may explode). This argument will sound familiar to most readers. Basically, everyone knows that the impact factor (IF) can be gamed by journals. IF reflects some sort of average citation rate for a journal; it says nothing about the quality of any given paper. Some people make the point that IF keeps the measurement of productivity from being solely a pub count. Others add that IF is imperfect, but it's "what we have."
At Science Online I attended a discussion of Alternative Metrics, or altmetrics:
As the volume of academic literature explodes, scholars rely on filters to select the most relevant and significant sources from the rest. Unfortunately, scholarship’s three main filters for importance are failing:
- Peer-review has served scholarship well, but is beginning to show its age. It is slow, encourages conventionality, and fails to hold reviewers accountable. Moreover, given that most papers are eventually published somewhere, peer-review fails to limit the volume of research.
- Citation counting measures are useful, but not sufficient. Metrics like the h-index are even slower than peer-review: a work’s first citation can take years. Citation measures are also narrow: influential work may remain uncited, impact outside the academy goes unmeasured, and the context and reasons for citation are ignored.
- The JIF, which measures journals’ average citations per article, is often incorrectly used to assess the impact of individual articles. It’s troubling that the exact details of the JIF are a trade secret, and that significant gaming is relatively easy.
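Since so much of this argument leans on those two metrics, here is a quick sketch of their standard definitions. This is the textbook formulation, not something from the manifesto itself.

```latex
\[
  h\text{-index} = \max\{\, h : \text{the author has at least } h \text{ papers with} \geq h \text{ citations each} \,\}
\]
\[
  \mathrm{JIF}_{Y} =
    \frac{\text{citations received in year } Y \text{ by items published in years } Y-1 \text{ and } Y-2}
         {\text{citable items published in years } Y-1 \text{ and } Y-2}
\]
```

The denominator is where much of the gaming happens: if a journal persuades the indexer to count fewer front-matter items as "citable" while citations to them still land in the numerator, the ratio rises without any change in the science.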
I hoped that the discussion would provide a gentle introduction to the concept of altmetrics. My hopes died, and I felt adrift during the session. I have played with some of the new measures on the altmetrics site. I get what these researchers want to do; I just have not figured out how each measure fits into a bigger picture. [I do appreciate more of the discussion now.]
For a kinder, gentler introduction to the topic, I recommend a piece in the current issue of The Chronicle of Higher Education that profiles Jason Priem, a graduate student in information and library science at the University of North Carolina at Chapel Hill. He helped develop Total-Impact, an altmetrics site that tracks research products as they are shared and discussed across the web. He discusses the general concept of the site as well as its current limitations (hey, it's still in alpha).
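To make the concept less abstract, here is a toy sketch of what "tracking discussion across the web" might look like. Everything in it is hypothetical, the source names, the numbers, the class; it does not reflect Total-Impact's actual code or API, just the general idea of tallying mentions from many venues instead of counting citations alone.

```python
from dataclasses import dataclass, field

@dataclass
class AltmetricsProfile:
    """Toy per-paper tally of web mentions (illustrative only)."""
    doi: str
    mentions: dict[str, int] = field(default_factory=dict)

    def record(self, source: str, count: int) -> None:
        # Accumulate mentions from a venue such as 'twitter' or 'mendeley'.
        self.mentions[source] = self.mentions.get(source, 0) + count

    def summary(self) -> str:
        total = sum(self.mentions.values())
        detail = ", ".join(f"{s}: {n}" for s, n in sorted(self.mentions.items()))
        return f"{self.doi}: {total} mentions ({detail})"

# Made-up numbers for a made-up DOI:
paper = AltmetricsProfile(doi="10.1234/hypothetical.5678")
paper.record("twitter", 42)
paper.record("mendeley", 17)
paper.record("blogs", 3)
print(paper.summary())
```

The tallying is the trivial part; the open question, and the one that left me adrift at Science Online, is what any of those numbers should mean when we judge the work itself.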
The internet disrupts traditional publishing; we no longer need to fit the scientific record to the dead-tree world of volumes and issues and page numbers. This shifting paradigm is dragging metrics along, potentially crushing IF in the process.
Over at Take as Directed the always-marvelous David Kroll posted an example of a scientific author taking exception to something David had written in a post. The author emailed about "mythology and gross misstatements" in the original post and offered to discuss the issues by phone. David asked him to point out the factual errors and make his case in the comments, which the author refused to do.
The comments house an interesting discussion, but many respondents also feel that participating in blog comments is not worth their time:
I do relate to the author’s hesitance about engaging in blog comments. We as scientists are often told to avoid the comments sections of posts, as they are a quagmire where the time and energy required for engagement vastly outweigh any benefit of participating. I am sympathetic to the author’s decision not to engage in that way.
As a moderator of the Science Online session on the resistance to scienceblogging by journals and other established authorities, I am curious about where this impression came from. Did someone actually tell this person not to engage in blog-based discussions? Is this merely a general impression? What is at the root of this resistance?
Please comment below or over at Take as Directed and let me know what you think. If this effort is worth my time, it's worth a bit of your day as well!