Over at Crooked Timber, John Quiggin lays into climate scientist Richard Lindzen. His post begins with reasons one might be inclined to take Lindzen's views seriously:
Unlike nearly all "sceptics", he's a real climate scientist who has done significant research on climate change, and, also unlike most of them, there's no evidence that he has a partisan or financial axe to grind.
But then, we find the 2001 Newsweek interview that gives Quiggin reason for pause:
Lindzen clearly relishes the role of naysayer. He'll even expound on how weakly lung cancer is linked to cigarette smoking. He speaks in full, impeccably logical paragraphs, and he punctuates his measured cadences with thoughtful drags on a cigarette.
And Quiggin's response:
Anyone who could draw this conclusion in the light of the evidence, and act on it as Lindzen has done, is clearly useless as a source of advice on any issue involving the analysis of statistical evidence.
I don't want to get into a debate here about climate science (although the neighbors will likely oblige if you ask them nicely), nor even about the proper analysis of statistical evidence. Instead, I'd like to consider whether enjoying being a contrarian (or a consensus-supporter, for that matter) is a potential source of bias against which scientists should guard.
The basic problem is nothing new: what we observe, and how we interpret what we observe, can be influenced by what we expect to see -- and, sometimes, by what we want to see. Obviously, scientists don't always see what they want to see, else people's grad school lab experiences would be deliriously happy rather than soul-crushingly frustrating. But sometimes what there is to see is ambiguous, and the person making the observation has to make a call. And frequently, with a finite set of data, there are multiple conclusions -- not all of them compatible with each other -- that can be drawn.
These are moments when our expectations and our 'druthers might creep in as the tie-breaker.
At the scale of the larger community of science and the body of knowledge it produces, this may not be such a big deal. There are loads of other scientists who are likely to have different expectations and 'druthers. In trying to take someone else's result and use it to build more knowledge, the thought is that something like replication of the earlier result happens, and biases that may have colored the earlier result will be identified and corrected. (Especially since scientists are in competition for scarce goods like jobs, grants, and Nobel Prizes, there's no reason not to identify problems with the existing knowledge base.)
But each scientist would also like, individually, to be as unbiased as possible. One of the advantages of engaging with lots of other scientists, with different biases than your own, is that you get better at noticing your own biases and keeping them on a shorter leash -- putting you in a better position to build objective knowledge.
So, what if you discover that you take a lot of pleasure in being a naysayer or contrarian? Is coming to such self-awareness the kind of thing that should make you extra careful in drawing contrarian conclusions from the data? If you actually come to the awareness that you dig being a contrarian, does it put you in a better position to take corrective action than if you enjoyed being a contrarian without realizing that the contrarianism itself was what was bringing you the enjoyment?
What kind of corrective action do I have in mind? I'm thinking of a kind of scientific buddy-system, matching scientists with contrarian leanings to scientists who are made happier by consensus-supporting. Such a pairing would be useful for each scientist in the pair: Here's the guy you have to convince! After all, one of the things serious scientists are after is a good grip on how things actually are. An explanation that a scientist with different default assumptions than yours can't easily dismiss is an explanation worth taking seriously. If, on the other hand, your "buddy" can dismiss your explanation, it would be good to know why so you can address its weaknesses (or even, if it is warranted, change your conclusions).
Such a buddy-system would probably only be workable with scientists who are serious about intellectual honesty and about getting knowledge that is as objective as possible. You wouldn't want to be paired with a scientist for whom having an open mind would be at odds with the conditions of his employment.