Answers in the comments section.
Oh that's funny. It's another high-profile attack on impact factors!
I have no desire to defend impact factors. None at all. I accept all the criticisms that have ever been raised about them. I understand the math.
I just feel about them the way Winston Churchill felt about democracy: they are the worst system for measuring impact, except for all the others that have ever been invented.
Clicking through to the actual declaration reveals its (almost) worthlessness.
Let's start with the General Recommendation:
This negative advice is useless because it doesn't point to an alternative. That's because the authors don't have a good one. We won't get rid of impact factors until we replace them with something just as easy to use.
This blithely assumes that evaluating scientific content of a paper is easy. It is not. It is very very difficult, even for experts in the field (if it were easy, expert reviewers would agree on manuscripts much more than they do). And grant reviewers and tenure+promotion evaluators are seldom experts. Grant reviewers are incredibly busy people generously donating their time. The Declaration authors are asking them to devote, I would guess, ten times as much time to the reviewing process. There is no way this is going to happen.
In the spirit of experimentation, and because our University Diversity Officer encouraged it, I tried to actually do this for our faculty job search this past year. We had 150 faculty candidates, each of whom listed >5 papers on their CV. To make it tractable, I first decided just to focus on the single highest profile paper on their CV and then I realized to fairly evaluate each paper I would need to read the top 3-4 papers in that field at a minimum. And then I realized that by focusing on the highest profile paper, I had already fallen into the IF trap. Oops! Evaluating scientific content is a full-time job. No one can do it. We must stop pretending they can.
Besides, asking people politely not to use impact factors ignores human psychology. In the absence of an alternative, people will subconsciously use them. Humans are exquisitely susceptible to anchoring. We all have memorized the impact factors of the big journals, and we all assume (usually correctly) that journals we haven't heard of have low impact factors. The only people I know who don't have the impact factors memorized are the younger grad students in my dept. And they are rapidly memorizing them the way kids in the 1950's memorized batting averages on their baseball cards. There is no way to forget this information. There is no way to not use this information in evaluating people's CVs. People who say they aren't doing this are probably deceiving themselves, as people generally do when they fall prey to anchoring biases.
There are some good recommendations here that will not make much of a difference, like "Be explicit about the criteria used to reach hiring, tenure, and promotion," and "Encourage responsible authorship practices and the provision of information about the specific contributions of each author." Sure. Make everything public. "Be open and transparent by providing data and methods used to calculate all metrics." I thought ISI already did this, but if not, shame on them. They really MUST do this. Also good advice: "Account for the variation in article types (e.g., reviews versus research articles), and in different subject areas when metrics are used, aggregated, or compared." ISI does not do this and they should.
Humorously, despite their hatred of biased measures, they suggest looking at other measures that have equivalent biases. But maybe this is good advice. If we take a bunch of variables and average them together, it might be harder to game this system. Also, it will encourage competition in the impact measuring business, and that ought to motivate impact measurers to develop better measures.
Finally, I admit I am just as guilty as anyone else here. I don't have anything better to propose. I just get annoyed when people naively pretend the solution is simple. It's like Professor Harold Hill's "Think System" from The Music Man. Getting rid of impact factors is going to take a lot more effort than just wishing them away.
My subjective sense is that there has been a profound change in the last 10 years in the socially accepted notions about why some people are gay.
I'm not talking about empirical science [about which I know very little], but about what we (leftish academic types) collectively espouse.
When I was in college (late 90's) it was cretinous and unenlightened to imply that anyone was Born This Way. Sexual orientation was called 'sexual preference' as if you could choose it. It was socially constructed, totally limited to present-day American culture, and words like gay and straight were too confining to express what most people actually were. There was not a doubt in anyone's mind that you could choose to change it at any time and that was your fundamental right. Even str8 frat dudes would place themselves somewhere between 2 and 3 on Kinsey's 7-point sexuality scale. And, if you asked whether Abraham Lincoln was gay, people would look at you as if you were a moron: "The whole notion of homosexuality is a recent invention that didn't exist in the 1800's," I was told by many people.
Nowadays, even though there's no single gay gene, people talk about how they always knew they were gay (or straight). No one seems to have any qualms with Lady Gaga's song, and people seem a lot more comfortable with categories and labels. And everyone knows you can't change it if you want to, even with the help of prayer and psychiatry.
I think this is probably all for the good. It probably helps convince homophobes to have sympathy, and helps conservative religious people to have tolerance. And it gives me permission to ask if Lincoln was gay (my opinion: probably not).
But it's interesting because it's a change in our answer to an empirical question (nature vs. nurture, dichotomous variable vs. continuum) without a lot more data. What has driven this change?
I see at least three (non-conflicting) possibilities:
(1) Empirical research (what little of it there is) has trickled down to the masses. Maybe even just on neural determinism, if not on the biological bases of homosexuality.
(2) Rhetorically it works better, w/r/t gay marriage.
(3) [my favorite] The decline of post-modernism in the academy.
You doubt #3? Bear with me a minute. Remember the 90's? Remember how post-modernism was king of everything? Everything in society was constructed and could be deconstructed. Especially homosexuality, which was an obsession of the post-modernists. Like, Michel Foucault (History of Sexuality), de Beauvoir, Lacan and the other post-Freudians, Judith Butler, ... all obsessed with homosexuality. As PM has declined, and with it its influence over our undergraduates, has that played any role in changing attitudes to why people are gay?
I have always been optimistic about brain-machine interfaces. But after visiting a lab that studies them last week, I am (like the members of that lab) much more skeptical.
This article presents a confusing picture of brain interfaces by going back and forth between invasive and non-invasive measures.
On one hand, people will probably never get brain implants without medical necessity.
On the other hand, there is a major practical limit on what one can get non-invasively (meaning EEG): lower bandwidth than, say, blinking and facial muscle movements. It may be possible to improve that, but according to researchers I met last week who study this problem, it's a very noisy measure, and algorithms may not be able to fix it, because the information simply is not there.
The best brain-machine interfaces are the ones we evolved for that purpose, our voice and hands. People who don't have access to those things will benefit greatly from BMI techniques, but most of us will never find them practical.
Whenever you turn your car off, it sends a ping to your iPhone and the phone turns its gears (or whatever) and stores your precise location.
That way, 2 hours later, in the rain, when you leave the mall, your phone knows exactly where you parked.
I have noticed that psychologists are always running around putting the 'psi' symbol everywhere to represent psychology.
I'm not sure why they do this.
I've also occasionally seen philosophers put a 'phi' symbol to abbreviate philosophy.
Anyhow, my question is, why don't neuroscientists put a 'nu' symbol on things?
"Geo-nerd Airplane Helper".
It helps you know where you are when you are in a plane flying over the country.
You tell it where you took off and landed, how far through the flight you are, and whether you are on the left or right hand side of the plane. Then it tells you what you expect to see.
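The "where am I, given the endpoints and the fraction of the flight elapsed" step is basically great-circle interpolation. Here's a minimal sketch (not from the original post; the SFO and ORD coordinates below are rough approximations I'm supplying for illustration):

```python
import math

def slerp(lat1, lon1, lat2, lon2, f):
    """Interpolate fraction f (0..1) along the great circle between two
    points given in degrees. Returns (lat, lon) in degrees."""
    def to_vec(lat, lon):
        # convert a lat/lon pair to a 3D unit vector
        la, lo = math.radians(lat), math.radians(lon)
        return (math.cos(la) * math.cos(lo),
                math.cos(la) * math.sin(lo),
                math.sin(la))
    a, b = to_vec(lat1, lon1), to_vec(lat2, lon2)
    dot = max(-1.0, min(1.0, sum(p * q for p, q in zip(a, b))))
    omega = math.acos(dot)  # angular distance between the endpoints
    if omega < 1e-12:
        return lat1, lon1  # endpoints coincide
    s1 = math.sin((1 - f) * omega) / math.sin(omega)
    s2 = math.sin(f * omega) / math.sin(omega)
    x, y, z = (s1 * p + s2 * q for p, q in zip(a, b))
    z = max(-1.0, min(1.0, z))  # guard against float drift
    return math.degrees(math.asin(z)), math.degrees(math.atan2(y, x))

# halfway from SFO to ORD (approximate airport coordinates)
print(slerp(37.62, -122.38, 41.98, -87.90, 0.5))
```

The midpoint comes out over the high plains around the Colorado/Nebraska area, which is consistent with the flight arcing north of the straight line you'd draw on a flat map.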
It also lets you take a picture out of the window and can tell you based on that what you are looking at.
Because I spend a lot of time trying to figure out what I am flying over. If I had an app, it would have entertained me basically the whole time, at least until we hit clouds in Chicago.
The thought came to me yesterday on my flight from SFO to ORD. I have a surprisingly large knowledge base about the tributaries of the Missouri, but I confused the Niobrara with the North Fork of the Platte. Embarrassing! I also am still not sure if that was Stockton or Sacramento we flew over.
So, the app needs to have a high resolution map, a list of interesting factoids, a geography recognizer, and useful tips on recognizing different geographical features. (E.g. Interstates don't have at-grade crossings, state highways often do, that kind of thing).
I think you can sell it for $1 and make at least $1000 on it.
There are a lot of awful posters out there and a small number of beautiful ones. We should cherish and honor the best ones, and encourage students to emulate them.
I am a huge fan of Dr Zen and his useful poster advice. I encourage my students and mentees to consult his resources early and often.
The important thing to remember: his advice is not idiosyncratic, but is objectively very good advice, based on sound principles, not on subjective taste.
That's why I was so distressed when I walked by a poster in the halls of Cold Weather Medical School. It was lame and dense with text and uninformative pictures. Then I noticed below it a sign saying it had won an award for best grad student poster the year before.
I attended two poster competitions this year for grad students. I was there when some of the judges came by. I looked at the posters that won. I think (although I don't know for sure) that the judges were impressed by the density of data. They wanted to see lots and lots of information because it correlated with lots of hard work.
I know some of the judges. They are not poster aesthetes. They are regular old professors. (They are probably the suckers who agreed to do it.) Many professors actively encourage their own students to make crappy posters.
The best posters emphasize the key points. Poster viewers have very limited bandwidth. The best poster talks are less than 5 minutes. Our judges (at least here) are not rewarding good poster behavior. The winning posters should be the best posters. The densest posters should not be held up as the best.
Poster contests (if they must happen, and I am not convinced they are a good thing), should have separate categories for poster design and poster talks. The judges in the design category should be required to read legitimate good advice about posters.
This morning on Twitter I came across yet another paper pointing out that we all do statistics wrong all the time. I won't link to it because I'm sure you've seen something similar before once or 100 times.
If you are like me, you read the article, feel guilty, resolve to learn the True Nature of Statistics and then forget about it.
Which is surely a mistake. These articles are undoubtedly right. Many of us have inaccurate pictures of what a t-test proves, what can be concluded from a p of 0.05, what alpha really means, why Neyman hated Fisher, why the Bayesians hate the Frequentists, etc. etc. etc.
And without a doubt science would be more accurate if we had a better grasp of basic statistics, instead of learning our math on the street.
So, shame on most of us.
Maybe statistics has already dealt with this problem in a roundabout way. Maybe the standard p=0.05 is more conservative than it should be if we all did our statistics right. If we all avoided all the sleazy things we sometimes do, then either (a) a lot less science would come out significant, or (b) we'd have to change p to something less strict. Maybe 0.15 (or something; that's just a guess) to get the same amount of science produced for the same amount of effort.
A p of 0.05 means there is a 5% chance of getting a test statistic at least as extreme as the one observed if the null is true. That level of skepticism seems fair, but only because we all grew up with that value. But, you know, it's actually pretty strict. If the goal is to correctly classify results as true or false, it's not good. This criterion will miss a lot of true results. That's fine, though, since alpha presumably incorporates a cost function in which a false positive is more costly than a miss. And if it does that, maybe it also incorporates sleaziness.
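To make that 5% concrete, here's a quick null simulation (my sketch, not from any of the articles): run two-sample t-tests on data where the null is true by construction, and about 5% of comparisons come out "significant." The 1.984 cutoff is the standard two-sided 5% critical value for 98 degrees of freedom.

```python
import math
import random

def t_stat(x, y):
    """Two-sample pooled-variance t statistic."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((v - mx) ** 2 for v in x) / (nx - 1)
    vy = sum((v - my) ** 2 for v in y) / (ny - 1)
    sp2 = ((nx - 1) * vx + (ny - 1) * vy) / (nx + ny - 2)  # pooled variance
    return (mx - my) / math.sqrt(sp2 * (1 / nx + 1 / ny))

random.seed(1)
trials, n, crit = 4000, 50, 1.984  # crit: two-sided alpha=0.05, df=98
false_pos = 0
for _ in range(trials):
    x = [random.gauss(0, 1) for _ in range(n)]
    y = [random.gauss(0, 1) for _ in range(n)]  # same distribution: null true
    if abs(t_stat(x, y)) > crit:
        false_pos += 1
print(false_pos / trials)  # hovers around 0.05
```

The point of the demo: alpha is exactly the false-positive rate you've agreed to tolerate when nothing real is going on; it says nothing about how many true effects you'll miss.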
It's true that the alpha of 0.05 is kind of a historical accident, but it would be a mistake to conclude that continued use is just inertia. Each generation has decided that the same value works pretty well for them, and continues to use it because it continues to work pretty well.
Anyway, I am not advocating we slack off on learning statistics. And I am not advocating sleaziness. We should all strive to be better, myself included. I am just asking these questions in the frame of mind of a sociologist of science, asking why things are the way they are.
This week I was reminded of the existence of U Mass Dartmouth.
And the need for this presumably august institution to change its name.
You really can't just combine two other famous names and hope no one notices.
I have considered marketing my soda and calling it "Pepsi Coke". But I didn't.
I have considered marketing a car and calling it "Lexus Audi". But I didn't.
I have considered offering an internet degree at my online only institution, which I am calling "Princeton Penn State."
You can't even abbreviate it as UMD. Because that's another famous college.
I suggest naming it Obama University. Since no one else is using that name yet.