Well...I had been planning on saving this post for a rainy day, somewhere far, far down the yellow brick road. And then --as fate would have it-- the Marc Hauser story broke. All over my windscreen. For those of you who have somehow missed the train-wreck in progress, these headlines should clue you in:
"Harvard morality researcher investigated for scientific misconduct" (Nature News)
"Investigation of scientist’s work finds evidence of misconduct, prompts retraction by journal" (Boston Globe)
"Harvard U. Finds Evidence of Scientific Misconduct by Professor" (Chronicle of Higher Ed.)
(New Scientist even decided to dub it #Hausergate. Ha.)
Hauser is -- as I'm sure many of you already know -- a big name in the field and has collaborated with even bigger names : among them, Noam Chomsky, Gary Marcus, Susan Carey, Antonio Damasio, and dozens more. If you read his publications list, it's a veritable who's who of psychology today. Which raises the question -- how did no one know this was going on?
It's the wrong question to ask. Because -- in all honesty -- people must have known, and they must have known for a while.
Harvard, I can assure you, did not want to hang one of their own out to dry. They have one of the best psychology departments in the world and this is the worst press imaginable. And in all seriousness -- what university wants to lead one of its best and brightest to the gallows, while admitting, shame-faced, to being cuckolded the entire time? That it took them three years to finally oust him is telling.
All I can say is : it must have been egregious.
I say this because I know of data-faking scandals that have been successfully hushed up in top departments, and of professors who have had journal articles retracted only to go on to successfully win tenure. (Some departments refuse to 'sell out' their professors, no matter how slipshod their research practices). Which makes Harvard's decision to turn its back on Hauser all the more -- shocking? insane? revelatory? smacking of ethics, even?
To mark the occasion, I've decided to relay some stories that may make you, well, a little less surprised at Hauser. And maybe a little more surprised by Harvard (in a good way?). While there are many stories I could tell here, I've decided to focus today on the journal review process, and how combative politics can interfere with good science and ethical decision-making in review. It should give you good reason to suspect that Harvard wasn't the first on the scent...
The Politics of Ideas : On Journals
First, a note about journal editors. At top-notch, high-impact journals, the power that editors have in shaping which theoretical work gets reported – or not – is massive. In psychology, where there are any number of competing theories to explain the same phenomena, the particular leanings of the editor can make a considerable difference.
Indeed, if an editor has a vested interest in the outcome of the review process and wants to exercise (undue) influence, she has a number of options. She can a) send the paper to people she knows will strongly favor acceptance, b) reject it out of hand, or c) significantly hold up the publication process, by either sitting on the paper as long as possible, or by sending it to unfavorable reviewers.
These are not the actions of ethical editors. But not all editors act ethically. Here are a handful of horror stories that illustrate how editorial politicking can interfere with science.
A couple of years ago, an acquaintance of mine – S. – submitted his thesis work to a well-regarded psychology journal. Although S. didn’t know this at the time, the action editor (AE) who received the paper had a vested interest in not seeing it published. Indeed, S.’s work threatened some of the theoretical claims AE was preparing to make in several upcoming papers. But instead of simply rejecting the paper – which had obvious merits and would clearly be high impact – AE decided to do something altogether more clever: substantially delay the publication process.
First, AE solicited reviews. The two reviews he received on the paper were uniformly positive but both asked for revision and for more experiments to shore up some of the claims that were made. This could have been dealt with fairly easily in revision, if S. had received the reviews in a timely fashion. But instead of sending back the reviews, AE claimed that he could not make a decision based on the received reviews and that he was still trying to find a third reviewer for the paper. This went on for a year or more, and S. became increasingly unsure about how to proceed. On the one hand, AE – who is rather famous in our field – appeared to be trustworthy, and sent letters assuring S. that he was still valiantly searching for a third reviewer. On the other hand, time continued to pass, and no reviews were returned.
But then, it got worse. At the end of that year, AE ended his term with the journal, but explained to the other editors that he would specifically like to see this paper through the rest of review. When S. tried to follow up with the journal about what had happened with his paper, no one was sure. AE had long since stopped responding to emails. While in retrospect, S. probably should have pulled the paper from the journal and resubmitted elsewhere, he was a relatively young researcher and was loath to start the process over after so much time had already passed.
So what was the outcome? Some two years after the paper was first received, the new head of the journal intervened, and – with sincere apologies and some embarrassment – returned the paper and the positive reviews. But by this stage, years had elapsed, and the work – which contained sophisticated and innovative modeling techniques – was no longer considered groundbreaking. Not only that, but AE had rather liberally borrowed from its literature reviews in his own work. In the meantime, S.'s promising young career and his confidence had been destroyed.
The paper – which is, to my mind, still a brilliant piece of research – remains unpublished. S. no longer works in academia.
Whither the reviewer?
The story of S. is tragic. But you may be wondering -- what's it got to do with Hauser? Ah -- but see, the politics of the review process don’t always work against the author. Sometimes, they work against the reviewer, and against the interests of science, more generally. Here are two such stories I’ve been told.
#1 In the first, a friend – J. – was asked to review a paper that criticized some rather provocative work she had done early on in her career, which had challenged the theoretical status quo. When she reviewed the paper, she found substantial flaws in both the researchers’ methods and in their statistics. Redoing their stats, she found that none of the claims they made in the paper were substantiated by the numbers they were reporting. In fact, far from disproving her earlier work, the researchers had replicated it. On the basis of J’s review, the journal formally rejected the paper. But then – under pressure, presumably – the head of the journal reversed the decision and accepted it, under a new title and a different action editor. It was published soon thereafter, faulty stats in place. It was only years later that J. received a formal apology from the new head of the journal, who told her what had happened. Not surprisingly, one of the authors on the paper had called in a favor with the head editor.
#2 In another, similar tale, a researcher, R., was looking over a much-discussed conference paper that was then in press as a journal article. R. quickly realized that the data the authors were reporting did not support their statistics. In fact, the researchers had made a serious error in analyzing their data that – when corrected for – led to a null result. R. contacted the lead author to notify him. When the article was published as a journal article several weeks later, the data reported had mysteriously “changed” to match the statistics.
...I know a virtual laundry list of similar stories. It seems clear to me that, even at top journals, the editorial process is not without missteps, oversights, and the occasional ‘inside job.’ In some ways, given how overworked and underpaid the editors are -- how could it not be? Of course, I strongly doubt that this is business as usual at these journals. But it's undeniable that these incidents do happen. And it's why I find the Hauser 'scandal' relatively unsurprising (or surprising only insofar as he was eventually caught). Because, think of it this way : who in psychology was better placed than Marc Hauser to call in the occasional favor? To ensure editors chose chummy reviewers? To fax over a last minute 'revision' to his data set? The man is undoubtedly charismatic -- he's a well-known public intellectual and wildly popular as a professor, and he has one of the most enviable lists of collaborators and co-investigators in the field. Regardless of the quality of his output --or the reality of it-- those three factors -- charm, popularity, and friends in high places -- would have made him an immensely powerful figure in the field. (If you think academia is so different from the Baltimore City Police Dept, think again!)
In other ways though -- regardless of corrupting influences -- it’s not wholly surprising that rotten statistics are going to press unvetted. From what I can make out, it's the rare conscientious reviewer who double-checks a paper's data against its stats. (Which is just bad news for science, frankly.)
Each year, the professor I work with, Michael, teaches his students a valuable lesson about this, showing them firsthand why it pays to keep abreast of the numbers. Michael opens the day's lecture by having the students debate the merits of a famous paper on sex-differences and their apparent relationship to autism. The debate rages fast and furious, until Michael stops to ask if anyone has looked closely at the statistics reported in the paper. Almost invariably, no one has. Over the next five minutes, Michael illustrates why the statistics in the paper are impossible given the data that the authors report. The debate ends there. Without any real results, what’s there to argue about?
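The kind of sanity check Michael does on the board takes only a few lines to do yourself. Here's a minimal sketch: given the summary statistics a paper reports (group means, standard deviations, and sample sizes), recompute the two-sample t statistic and compare it to the one in print. The numbers below are entirely hypothetical, invented for illustration -- they are not from any actual paper.

```python
from math import sqrt

def pooled_t(mean1, sd1, n1, mean2, sd2, n2):
    """Two-sample t statistic (pooled variance) from reported summary stats."""
    sp2 = ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2)
    return (mean1 - mean2) / sqrt(sp2 * (1 / n1 + 1 / n2))

# Suppose a (hypothetical) paper reports these group stats and claims t(58) = 4.10.
t = pooled_t(104.3, 12.1, 30, 97.8, 11.4, 30)
print(round(t, 2))  # → 2.14 -- nowhere near the reported 4.10
```

If the recomputed value is far from the reported one, either the summary stats or the test statistic (or both) are wrong -- exactly the sort of mismatch that, per the stories above, too rarely gets caught in review.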
I’ve thought about this a lot in the years since I first sat through that class, and particularly after I heard some of the stories recounted above. And it’s been surprising to me, the number of times that I’ve come upon papers – famous and well cited ones, at that – that have made a royal mess of what they’re reporting. Either the statistics don’t match the data, or the claims don’t match the statistics. This is more than a little unnerving. When foundational papers, which make important and widely-believed claims, aren’t showing what they purport to, then what can we trust? Who can we cite? What should we believe?
And furthermore: How does this happen? Isn’t the review process in place to ensure methodological rigor, statistical accuracy, and supportable claims?
Absolutely -- in theory. With a good, not-too-overworked editor, and ethical, earnest reviewers, that will be the outcome. But from what I can tell, unfortunately that’s not how the review process in psychology always works. Far too often, the politics of ideas disrupts the honorable pursuit of science.
[...There's more where that came from. This is the first in an occasional series on the politics of ideas in psychology.]
[Having had my say, I would recommend you also read a very different perspective written by Art Markman: sometime limericist, consummate gentleman and head editor of Cognitive Science.]
[And finally, as an obvious addendum, I want to make clear that there are many, many ethical and hard-working editors and reviewers in this field, some of whom I have had the great pleasure of working with. That there are politics at play is evident. That there are many scientists far more interested in the pursuit of ideas than in the pursuit of power, is doubly so.]