The Politics of Ideas : Hauser Gone Wild

Aug 11 2010 | Published by melodye under Forget What You've Read!

Well...I had been planning on saving this post for a rainy day, somewhere far, far down the yellow brick road.  And then --as fate would have it-- the Marc Hauser story broke.  All over my windscreen.  For those of you who have somehow missed the train-wreck in progress, these headlines should clue you in:

"Harvard morality researcher investigated for scientific misconduct" (Nature News)

"Investigation of scientist’s work finds evidence of misconduct, prompts retraction by journal" (Boston Globe)

"Harvard U. Finds Evidence of Scientific Misconduct by Professor" (Chronicle of Higher Ed.)

(New Scientist even decided to dub it #Hausergate.  Ha.)

Hauser is -- as I'm sure many of you already know -- a big name in the field and has collaborated with even bigger names : among them, Noam Chomsky, Gary Marcus, Susan Carey, Antonio Damasio, and dozens more.  If you read his publications list, it's a veritable who's who of psychology today.  Which raises the question -- how did no one know this was going on?

It's the wrong question to ask.  Because -- in all honesty -- people must have known, and they must have known for a while.

Harvard, I can assure you, did not want to hang one of their own out to dry.  They have one of the best psychology departments in the world, and this is the worst press imaginable.  And in all seriousness -- what university wants to lead one of its best and brightest to the gallows, while admitting, shame-faced, to being cuckolded the entire time?  That it took them three years to finally oust him is telling.

All I can say is : it must have been egregious.

I say this because I know of data-faking scandals that have been successfully hushed up in top departments, and of professors who have had journal articles retracted only to go on to successfully win tenure.  (Some departments refuse to 'sell out' their professors, no matter how slipshod their research practices).  Which makes Harvard's decision to turn its back on Hauser all the more -- shocking?  insane?  revelatory?  smacking of ethics, even?

To mark the occasion, I've decided to relay some stories that may make you, well, a little less surprised at Hauser.  And maybe a little more surprised at Harvard (in a good way?).  While there are many possible stories to relay here, I've decided to focus today on the journal review process, and how combative politics can come to interfere with good science and ethical decision-making in review.  It should give you good reason to suspect that Harvard wasn't the first on the scent...

The Politics of Ideas : On Journals

The Editor

First, a note about journal editors.  At top-notch, high-impact journals, the power that editors have in shaping which theoretical work gets reported – or not – is massive.  In psychology, where there are any number of competing theories to explain the same phenomena, the particular leanings of the editor can make a considerable difference.

Indeed, if an editor has a vested interest in the outcome of the review process and wants to exercise (undue) influence, she has a number of options.  She can a) send the paper to people she knows will strongly favor acceptance, b) reject it out of hand, or c) significantly hold up the publication process, by either sitting on the paper as long as possible, or by sending it to unfavorable reviewers.

These are not the actions of ethical editors.  But not all editors act ethically.  Here are a handful of horror stories that illustrate how editorial politicking can interfere with science.

The Delayer

A couple of years ago, an acquaintance of mine – S. – submitted his thesis work to a well-regarded psychology journal.  Although S. didn’t know this at the time, the action editor (AE) who received the paper had a vested interest in not seeing it published.  Indeed, S.’s work threatened some of the theoretical claims AE was preparing to make in several upcoming papers.  But instead of simply rejecting the paper – which had obvious merits and would clearly be high impact – AE decided to do something altogether more clever: substantially delay the publication process.

First, AE solicited reviews.  The two reviews he received on the paper were uniformly positive, but both asked for revision and for more experiments to shore up some of the claims.  This could have been dealt with fairly easily in revision, if S. had received the reviews in a timely fashion.  But instead of sending back the reviews, AE claimed that he could not make a decision based on the reviews he had received and that he was still trying to find a third reviewer for the paper.  This went on for a year or more, and S. became increasingly unsure about how to proceed.  On the one hand, AE – who is rather famous in our field – appeared to be trustworthy, and sent letters assuring S. that he was still valiantly searching for a third reviewer.  On the other hand, time continued to pass, and no reviews were returned.

But then, it got worse.  At the end of that year, AE ended his term with the journal, but explained to the other editors that he would specifically like to see this paper through the rest of review.  When S. tried to follow up with the journal about what had happened with his paper, no one was sure.  AE had long since stopped responding to emails.  While in retrospect S. probably should have pulled the paper from the journal and resubmitted elsewhere, he was a relatively young researcher and was loath to start the process over after so much time had already passed.

So what was the outcome?  Some two years after the paper was first received, the new head of the journal intervened, and – with sincere apologies and some embarrassment – returned the paper and the positive reviews.  But by this stage, years had elapsed, and the work – which contained sophisticated and innovative modeling techniques – was no longer considered groundbreaking.  Not only that, but AE had rather liberally borrowed from its literature reviews in his own work.  In the meantime, S.’s promising young career and his confidence had been destroyed.

The paper – which is, to my mind, still a brilliant piece of research – remains unpublished.  S. no longer works in academia.

Whither the reviewer?

The story of S. is tragic. But you may be wondering -- what's it got to do with Hauser?  Ah -- but see, the politics of the review process don’t always work against the author.  Sometimes, they work against the reviewer, and against the interests of science, more generally.  Here are two such stories I’ve been told.

#1 In the first, a friend – J. – was asked to review a paper that criticized some rather provocative work she had done early on in her career, which had challenged the theoretical status quo. When she reviewed the paper, she found substantial flaws in both the researchers’ methods and in their statistics.  Redoing their stats, she found that none of the claims they made in the paper were substantiated by the numbers they were reporting.  In fact, far from disproving her earlier work, the researchers had replicated it.  On the basis of J.’s review, the journal formally rejected the paper.  But then – under pressure, presumably – the head of the journal reversed the decision and accepted it, under a new title and a different action editor.  It was published soon thereafter, faulty stats in place.  It was only years later that J. received a formal apology from the new head of the journal, who told her what had happened.  Not surprisingly, one of the authors on the paper had called in a favor with the head editor.

#2 In another, similar tale, a researcher, R., was looking over a much-discussed conference paper that was then in press as a journal article.  R. quickly realized that the data the authors were reporting did not support their statistics.  In fact, the researchers had made a serious error in analyzing their data that – when corrected for – led to a null result.  R. contacted the lead author to notify him.  When the paper was published as a journal article several weeks later, the data reported had mysteriously “changed” to match the statistics.

...I know a virtual laundry list of similar stories. It seems clear to me that, even at top journals, the editorial process is not without missteps, oversights, and the occasional ‘inside job.’ In some ways, given how overworked and underpaid the editors are -- how could it not be?  Of course, I strongly doubt that this is business as usual at these journals. But it's undeniable that these incidents do happen.  And it's why I find the Hauser 'scandal' relatively unsurprising (or surprising only insofar as he was eventually caught).  Because, think of it this way : who in psychology was better placed than Marc Hauser to call in the occasional favor?  To ensure editors chose chummy reviewers?  To fax over a last-minute 'revision' to his data set?  The man is undoubtedly charismatic -- he's a well-known public intellectual and wildly popular as a professor, and he has one of the most enviable lists of collaborators and co-investigators in the field.  Regardless of the quality of his output --or the reality of it-- those three factors -- charm, popularity, and friends in high places -- would have made him an immensely powerful figure in the field.  (If you think academia is so different from the Baltimore City Police Dept, think again!)

In other ways though -- regardless of corrupting influences -- it’s not wholly surprising that rotten statistics are going to press unvetted.  From what I can make out, it's the rare conscientious reviewer who double-checks a paper's data against its stats.  (--Which is just bad news for science, frankly.)

Each year, the professor I work with, Michael, teaches his students a valuable lesson in this, showing them firsthand why it pays to keep abreast of the numbers.  Michael opens the day's lecture by having the students debate the merits of a famous paper on sex differences and their apparent relationship to autism.  The debate rages fast and furious, until Michael stops to ask if anyone has looked closely at the statistics reported in the paper.  Almost invariably, no one has.  Over the next five minutes, Michael illustrates why the statistics in the paper are impossible given the data that the authors report.  The debate ends there.  Without any real results, what’s there to argue about?

I’ve thought about this a lot in the years since I first sat through that class, and particularly after I heard some of the stories recounted above.  And it’s been surprising to me, the number of times that I’ve come upon papers – famous and well cited ones, at that – that have made a royal mess of what they’re reporting.  Either the statistics don’t match the data, or the claims don’t match the statistics.  This is more than a little unnerving.  When foundational papers, which make important and widely-believed claims, aren’t showing what they purport to, then what can we trust?  Who can we cite?  What should we believe?

And furthermore: How does this happen? Isn’t the review process in place to ensure methodological rigor, statistical accuracy, and supportable claims?

Absolutely -- in theory.  With a good, not-too-overworked editor and ethical, earnest reviewers, that will be the outcome.  But from what I can tell, unfortunately, that’s not how the review process in psychology always works.  Far too often, the politics of ideas disrupts the honorable pursuit of science.

[...There's more where that came from.  This is the first in an occasional series on the politics of ideas in psychology.]

[Having had my say, I would recommend you also read a very different perspective written by Art Markman: sometime limericist, consummate gentleman and head editor of Cognitive Science.]

[And finally, as an obvious addendum, I want to make clear that there are many, many ethical and hard-working editors and reviewers in this field, some of whom I have had the great pleasure of working with. That there are politics at play is evident. That there are many scientists far more interested in the pursuit of ideas than in the pursuit of power, is doubly so.]

42 responses so far

  • Kinda depressing. We can just hope that this doesn't happen too often. I'm curious about the example Mike uses in his class. Is it possible to tell us a little more about how the stats aren't possible with the data (even if you can't point out the paper)? As a researcher, I'd like to know.

  • BTW, still really enjoying your blog. You're a gifted writer.

  • Excellent post, and although I'd prefer an innocent-until-proven-guilty handling of Hauser, I can't fault your analysis.

    As regards the peer-review process, the horror stories you've reported are similar to what I've heard myself.

    Personally, I blame the 'publish-or-die'/citation industry. There's too much emphasis on banging out papers, which creates an overly-competitive and underly-reflective atmosphere.

    I mean, how many original ideas does a researcher have in their career? I'd say four, tops.

  • I think your analysis is of value... we all need to think about what's happening in our labs and our journals. (I'm an anthropologist, but the point is overarching.) I do ALSO think that this part is unfair, because far too speculative:

    >>Because, think of it this way : who in psychology was better placed than Marc Hauser to call in the occasional favor? To ensure editors chose chummy reviewers? To fax over a last minute ‘revision’ to his data set?

    Do we really need this kind of guessing, before we know what's gone on? As a primatologist I've learned a great deal from Hauser's work. I'm interested in finding out what went wrong, (and what I have to unthink that I already learned?), but I'm not interested in casting aspersions without evidence.

    • melodye says:

      @Barbara (and @ similar comments below) I do appreciate that that line is --in all fairness-- wildly speculative. I do not know the precise details; nor will I likely ever, as Harvard is bound to conceal the details of its investigation. But I'm not casting about without evidence. The fact of the matter is : the guy's been caught. There's no shallow end to that. Given the strength of his position, the fact that he has been subjected to a three-year long investigation and put on leave from Harvard is quite astounding. It is enough to know that a top department would not do that to one of their own without overwhelming evidence (or, possibly, a whistleblower). I find it supremely unlikely that the decision was made on the basis of either the single Cognition paper, 'unintentional' misanalysis of data, or poor record keeping. At an institution like Harvard, you don't hang out your own to dry without fabulously compelling evidence. There's far too much at stake.

      • I agree with your paragraph here in reply to me, except... I was objecting only to the lines I specifically quoted. Others may have objected more broadly, but my comment was very specifically geared to the speculation I quoted.

        Just to be clear. Thanks for taking the time to reply.

  • Jules says:

    The Wire comparison is quite apt.

    What I always heard about him is that he held back papers from his postdoc days to make sure his publication record as a jr prof would get him tenure. Making him the only(?) person to ever get tenure as a jr professor in that department.

  • John Hawks says:

    What a great post, thanks Melody! It really is astounding that the inquiry produced this result; it's also astounding that there isn't more public knowledge of what happened. Most similar inquiries seem designed to cover up or downplay misconduct, even where there is a clear public record pointing to it.

  • Art Markman says:

    Nice perspective on this issue. Thanks for linking to my (somewhat different) take on the situation.

    In the end, I guess I'm just impressed at how well the system works. There are a large number of journals out there, and they permit a diversity of opinion to get through. There are definitely frustrating individual cases, but I'd say that in the long run justice is often done.

    • melodye says:

      Art -- I like your view on the issue, and think that in many ways, you are in a much better position to have a balanced perspective than I am -- being at once a researcher, an editor, and a reviewer, and having had a much longer run in the field than I have. Which is why I linked to you. If I'm going to do 'fair and balanced' (haha, hopefully not quite in the style of Fox News...) I wanted to have a different and fresh perspective to share.

      And you're right -- your insight into the process is somewhat (rather than 'very') different from my own. I would like to believe that science can be self-correcting too. I think there are political factors that are obstructions to good science --and often as not facilitate bad science-- which I would like to see removed. But I certainly appreciate both your optimism and your experience in this.

  • Tom D says:

    Out of curiosity, while the various journals do not want to publish bad data, and while it will happen occasionally, is there any pressure that can be brought to bear by 'the audience' to help prevent these errors? I mean, if I paid *that much* for a subscription to a journal and as an audience member found that (made up here) 1 in 5 papers had serious basic math errors or other basic errors found in the paper, I'd ask for 1/5 of my money back. Hitting these journals in the pocket (as well as the blow to esteem - which is how these places thrive) may help cut down on the politics of bad papers getting published.

    Admittedly, this won't help the good papers get published when they are unpopular for whatever reason, but it seems to be a start.

  • Jason G. Goldman says:

    >>All I can say is : it must have been egregious.

    I think, with all due respect to my co-blogger, that you're being far far far too speculative.

    There is just no reason to believe - given the limited DATA available to us - that any wrong has been done or data has been faked. That scientists, who try to live and die by the data, are ready to reject the null hypothesis in this matter on the basis of correlation and anecdote is, frankly, unsettling. We routinely slam the mainstream media for *getting it wrong* when reporting on science. Yet we're prepared to accept certain details of the story in this case? I think it's irresponsible to participate in this sort of name-calling -- at least not until the dust settles, and it becomes clear what the data are.

    David Dobbs and Drugmonkey have far more even-handed takes on this.

  • SV Shepherd says:

    What do you propose could fix the situation? I see two different points here: First, that there's a lot of politicking in science associated with publications. That seems to be a cultural issue for which there is no easy-to-implement policy solution. Second, the lack of statistical rigor in the review process seems fairly straightforward to address. Journals could either (a) hire a statistics editor or (b) specifically request statistical checks in their review process. Relatedly, I'd like it very much if it became the norm to publish analysis code and perhaps even raw data alongside the paper.

    Why doesn't this happen? My hunch is that it opens the authors up to more scrutiny, which is uncomfortable, and so they don't volunteer it. Fearing they'd be offered fewer papers, the journals don't mandate it. And since nobody factors this in when evaluating publication record, and since replications (or failures-to-replicate) are so rarely published, there's no consistency check. You publish, you get credit, regardless of how good the papers were.

  • DrugMonkey says:

    I would also like to hear more about the stats not matching the data. What I would especially like distinguished is whether it is a matter of underlying assumptions that dictate which stats are run, failure to control for multiple comparisons or something else.

    The reason being that people are usually full of all sorts of fixed ideas about what is the "proper" analysis for a research study. The real world is much messier, and the fact of the matter is that every selection of statistical approaches comes with limitations and assumptions. Ultimately the statistical analysis of research data is based on the idea that we do not EVER know what the character of the population of interest really is, in omnipotent Truth. This is why they are called inferential statistics.

    • melodye says:

      @DrugMonkey I appreciate your point, but given the sheer length of the investigation, I'm sure Harvard made d*mn sure they were doing the right thing. Unfortunately, my bet is that they're unlikely to ever release the details. Why would they? By stonewalling the media, they all but ensure that this story dies on the vine. What I am (somewhat) surprised by is the fact that the media hasn't solicited comments from Steve Pinker yet. It's usually the case that Pinker is the go-to talking head for this kind of story. That he isn't quoted on this by the Globe or any other outlet makes me wonder whether Hauser's colleagues aren't under some sort of gag rule. That would make sense given the likely extent of the investigation, although we'll see how it plays out over the next couple of days...

      • JP says:

        Given that Marc was one of the people very influential in orchestrating Steve's move from MIT to Harvard (after which Steve had been pining for years), I'm not surprised Steve's not commenting about the fate of his pal.

      • Jason says:

        I've seen a short comment by Steven Pinker about this somewhere.... at the NYT perhaps? It is very boilerplate.

    • I'd also like to know more about how a reviewer could know that the stats weren't done right, since reviewers don't have access to the raw data. Are you saying that the F-value and p-value don't line up? That the number of degrees of freedom is incorrect? That, as drugmonkey asked, no corrections for multiple comparisons were run? That the data was binary but an ANOVA was run anyway?

      • melodye says:

        In answer to your question: it could be any number of things. There are usually a couple of ways to get your stats right, and many, many ways to get them wrong. And as a reviewer, you don't necessarily need raw data to know that the stats are bunk, though obviously it would be extremely helpful. It's enough to know, for example, that a researcher did the wrong tests for their data, or that the numbers they provide couldn't possibly give them the statistics they report. (Sometimes the numbers really are that off.)

        In a recent study I was looking at, the researchers had -- in an earlier paper -- gone into great detail on why you shouldn't use a particular proportional measure for interpreting the results they got in a task (because it produced a highly misleading comparison between groups).  Yet in that particular study, they failed to get interesting results from any other analysis and so used it anyway, to establish the one 'credible' finding in the paper.  From my perspective, that amounts to data mining...  Bizarrely, the researchers cited the previous paper as a means to establish the credibility of this method.  I'm guessing no one else actually went back to read the original work?
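        (If you're wondering what that kind of check looks like in practice, here's a minimal sketch in Python -- with invented numbers, not taken from any paper discussed here. The trick is simply that a test statistic, its degrees of freedom, and its p-value over-determine one another, so a reviewer can catch gross mismatches without ever seeing the raw data.)

            # A hypothetical example (all numbers invented) of the kind of
            # consistency check a reviewer can run using only the values
            # printed in a paper.
            from scipy.stats import f

            # Suppose the (hypothetical) results section reports:
            # F(1, 26) = 4.52, p < .001
            df1, df2 = 1, 26
            F_reported, p_reported = 4.52, 0.001

            # The p-value is fully determined by the F statistic and its
            # degrees of freedom, so we can recompute it and compare.
            p_recomputed = f.sf(F_reported, df1, df2)
            print(f"F({df1}, {df2}) = {F_reported} implies p = {p_recomputed:.3f}")

            # Here the recomputed value comes out around p = .043, nowhere near
            # the reported p < .001, so one of the printed numbers (the F, the
            # dfs, or the p) cannot be right.
            if p_recomputed > 10 * p_reported:
                print("reported p-value is inconsistent with the reported F and dfs")

        The same arithmetic works for t-tests and chi-squares; none of it requires access to the raw data, just the numbers the authors chose to print.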

  • hazinsideinfo says:

    >>I’d prefer an innocent-until-proven-guilty handling of Hauser

    Did you miss the part about the three-year-long investigation? Has it been reported yet that Hauser hired Alan Dershowitz as his lawyer?
    I can assure you that he has, in fact, been proven guilty.

    >>There is just no reason to believe – given the limited DATA available to us – that any wrong has been done or data has been faked.

    While that's a commendable attitude, it's also naive and wrong in this case. Hauser faked data, repeatedly.

    • tom says:

      What exactly is the evidence that Hauser faked data? It appears that the two non-retraction problems involved lost field notes/recordings, and that Hauser successfully replicated the experiments with incomplete documentation. That doesn't sound very shocking to me. I don't think that there is any lab that has never lost any documentation. Nobody knows what the retraction is about; it might be something similar, or it might be something else entirely. And since you seem to be so sure that people must have "known", what exactly must they have known? That Hauser lost field notes?

      It's surprising that a self-proclaimed "science blog" can't see beyond the tabloid-style reporting of the Boston Globe. Just make sure that evidence doesn't get in the way of your judgments. Way to go for a "scientist."

      • melodye says:

        The APA (gold standard) guideline for maintaining physical records is three years. Yet Hauser is retracting a paper from 2002. That makes the paper itself over seven years old, and the data likely much older still. I leave it to readers to determine whether the "lost or incomplete documentation" scenario seems likely.

        • tom says:

          In other words, you don’t have any idea about why the paper has been retracted, but you are very happy to indulge in what might well turn out to be slander. Congratulations for your high ethical standards.

          • tom says:

            Also, it’s not unlikely at all that data retention problems might have been the cause of the retraction. In fact, if no data had been retained at all, it’s hard to see why anything would have been retracted in the first place; you can’t locate problems if you don’t have access to the data. So either there was incomplete data, or there was a problem with the data. Or something else. But nobody knows.

            Just for the record, based on your suspicions that somebody like Hauser might have an easier time publishing, I assume that you have never worked with a big name like him. I have worked with several scientists in a league with Hauser (though not with Hauser himself), and I can tell you that this makes it more *difficult* to publish. There is always a reviewer who disagrees with something the big shot said years ago even if it’s completely irrelevant to the paper, or a reviewer who just doesn’t like your adviser and tries to kill your paper. I never had such problems when publishing on my own. This might be outrageous, but what about this: big names are big names because they tend to do good stuff.

      • melodye says:

        Tom -- I'm working with way more data points than you know. Check out the recent NY Times news coverage, quoting Mike Tomasello (1, 2).

        • Hidden data points says:

          My sense is that some of the less-friendly responses you received resulted because your original post didn't make clear that you had inside information about the Hauser case. It originally sounded as if you were making inferences from the original news articles combined with knowledge of previous cases (i.e. inferences like "it must have been egregious"), when really you were reporting inside information, that had not been posted anywhere on the web at that time.

    • John says:

      Dershowitz will be Hauser's attorney? You were joking, right? The big-time Chomsky baiter Dershowitz will defend Hauser the two-time co-author with Noam Chomsky? But that was all a joke, wasn't it?

  • Janet D. Stemwedel says:

    Ping!

  • physioprof says:

    Publishing "stories" like these that you've "heard", but with no corroboration, is nothing but gossip-mongering.

  • Yannis Guerra says:

    Given that he was such a prominent investigator of morality and that he was actually writing a book about "being evil", I wonder...
    What if it is all a publicity stunt? Or research into the reaction of media/blogs to such a low-hanging story?
    Imagine if in 6 months he comes out with the book, and at the same time Harvard comes out with a public statement that all of this was a social experiment on how we, as a society, react to a situation like this? He may even consider discussing the benefits of having done it, as it puts forward a discussion of the defects of the system by members of the science media, who normally would be using their time discussing science results and not the process itself.
    Nah. This would require perfect timing and a strong sense of risk/benefit. Crazy fun conspiracy idea that can't be true...
    Or is it?

  • Jarrett says:

    Do you think these problems also plague nutritional science?

  • Norval Smith says:

    "That there are politics at play is evident. That there are many scientists far more interested in the pursuit of ideas than in the pursuit of power, is doubly so."

    Unfortunately not being interested in the pursuit of power is not a necessary pre-condition for reaching positions of power, while being interested in the pursuit of power certainly helps. After forty years as a researcher I am certainly aware of people in positions of power who have not earned these positions by their brilliant research work.

    Although you don't mention the word "anonymous" in your criticisms of peer-reviewing, this affords additional opportunities for reviewers with hidden agendas to delay publication, or even destroy careers.

    Norval Smith
    University of Amsterdam

  • Mind the Gap says:

    @Tom..."I don’t think that there is any lab that never lost any documentation... "
    Wow. No wonder the public cannot trust scientists.

    They should admit that they (and their teams) are fallible humans, sometimes lazy, messy, disorganized, and unethical when it is expedient. I have no problem with that. What I want is for scientists to shut up when given rules, regulations, report requirements, and other forms of oversight. We do not "trust" them with our health & safety. Like all humans, they need motivation to follow the rules.

    For a group that touts skepticism, they sure don't like it when the light is shining on them & their labs, data, teaching methods, or pronouncements.

  • My own feeling is that the academic journal/peer review process is mired in the very real limitations of a system that was developed in another age.

    The sheer breadth and speed at which technology has transformed information has outrun the ability of the standard hierarchy to transpose it. At the same time, that same technology is providing opportunities and platforms for truly rigorous review, publication, and dissemination of knowledge that were unavailable to those who originally framed the system centuries ago.

    It seems odd to me that here in the age of the internet, hard work and brilliance are still subject to the systemic and unavoidable gate-keeping foibles of another, long past age.

  • Lay Professor says:

    This is a response, originally posted on David Dobbs' blog, to Hauser's retracted paper. I hope I am not violating any convention by reposting it. I think it addresses a number of the comments here regarding whether the retracted paper is evidence of data fabrication. At this point I think it is evidence of sloppy research combined with sloppy reviewing.

    I am actually much more suspicious of the "redone" studies. The parts of these studies that went missing are videotapes of free-ranging rhesus monkeys. It is impressive that years after the original data were collected a team can go back to Cayo Santiago, videotape monkeys and get results that are the same as those from the missing data. I would certainly love to see the raw data. I find this incredible, if not uncredible.

    Here's my take on the retracted Cognition paper.

    -------------------------------------------------
    Harvard’s stated reason for asking Cognition to retract the 2002 Cognition paper was that the faculty committee “found that the data do not support the reported findings.” This might mean that the data were fabricated, cherry-picked, or don’t exist. However, there is a less nefarious explanation for Harvard’s conclusion that comes from reading the published paper. Shockingly, the data, such as they are, in the published paper do not support the conclusion. I suspect that few have read the retracted paper, but I also suspect that if they did, they would be astonished that it was ever published. The total experimental data consist of a single data point evaluated with a chi-square test.

    The basic claim of the paper is that monkeys who have habituated to an auditory stimulus of the form ABB pay no attention to a novel ABB stimulus, such as XYY, but do attend to a novel order of the same syllables, such as XXY. The only data presented in the paper are the number of animals who respond (or show no response) when the test stimulus is the same as the habituation stimulus and the number who respond when the test stimulus is different from the habituation stimulus.

    Here is the complete statement of experimental results from the paper:
    “When presented with the two test trials, however, subjects were more likely to respond by orienting toward the speaker when the pattern changed from the habituation series than when it stayed the same (Fig. 2; χ2 = 5.60, df = 1, P < 0.02).”

    Forget for a moment that the chi-square is not appropriate (the same animals are in both experimental conditions, and chi-square assumes independent groups). The more serious issue is that there was no statistical comparison of the number of animals responding to the same condition and those responding to the different condition. The claim is that in the same condition no one should respond (they are habituated to exactly this sort of test stimulus) and everyone should respond to the "different" stimulus. Instead, what the data figure shows is that 43% of the monkeys respond to the "Same" stimulus, whereas 86% respond in the "different" condition. While this appears to be a large difference, with 14 subjects per group this is not statistically reliable. In other words, the results of the study do not support the conclusions, possibly because the key contrast is not significantly different.

    This paper appears to suffer from sloppy reviewing. One can reasonably ask why it was ever published. It is a lightweight study in which the reviewers didn't even demand that the limited data be fully analyzed using appropriate statistics (a G-test, for example, which doesn't require independent groups). For example, Hauser could have reported the number of his 14 subjects who did not respond to the same condition but responded to the different condition. In an ideal world, and if Hauser's hypothesis is correct, this should be 14 subjects. The chance probability of having those two outcomes is 0.25, so there should be 3-4 subjects that show the pattern by chance. There is enough power in this study to detect that difference as statistically reliable. Why was this comparison not required for publication?
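    (To make that power point concrete, here is a small back-of-envelope sketch in Python. The binomial framing is my own addition; the numbers -- 14 subjects, a 0.25 chance probability of showing the predicted pattern -- are just the ones stated above, not anything reported in the paper.)

        # Back-of-envelope power check (framing mine; numbers as stated above).
        # Under chance responding, each of the 14 monkeys shows the predicted
        # "no response to same, response to different" pattern with
        # probability 0.5 * 0.5 = 0.25.
        from scipy.stats import binom

        n, p_chance = 14, 0.25
        print(f"expected by chance: {n * p_chance:.1f} of {n} subjects")
        print(f"P(all {n} show the pattern | chance) = {binom.pmf(n, n, p_chance):.1e}")
        print(f"P(8 or more of {n} | chance) = {binom.sf(7, n, p_chance):.4f}")

    If Hauser's hypothesis were straightforwardly right, nearly all 14 subjects should show the pattern, and even a much weaker showing would stand out against the 3-4 expected by chance -- which is the sense in which the study had the power to detect the effect, had the comparison been run.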

    As it stands, this is a heavyweight conclusion based on decidedly lightweight data. However, the Harvard statement doesn't necessarily reflect fraudulent data; it could simply be an accurate evaluation of this paper. Maybe that is why Cognition has not commented on its retraction. It is an embarrassment to all involved.

    • melodye says:

      "incredible if not uncredible" ...ha! I like it.

      If Hauser can get away with one retraction, he'll quietly return to his career as an academic superstar next year. I think that the retraction at Cognition is so much smoke screening. As you said, it seems to be the least worrisome of his papers.

      This is somewhat speculative, but I would imagine that the kinds of problems Dobbs mentions above are endemic to Hauser papers. You can tell what the primatology community thinks about Hauser by reading Tomasello's quotes about him in the NY Times and the Globe. That attitude has been in place for over a decade, from what I know. Yet he clearly got a free pass on the Cognition paper (or else why would the stats have been so sh*t?). I can't comment on that particular journal article because, frankly, I don't know the specifics there; but this is precisely the sort of thing that happens when you consistently stack the deck in review.

  • Uncle Al says:

    In 1965 US President Lyndon Johnson made the world right with academic rigor (re Robert Strange McNamara in Defense) lubricated by unlimited pilferage of the productive and impressed as jackbooted State compassion - "The Great Society." 45 years and a few $gazillion later, we perceive that a congenitally blind man throwing unfletched darts could have hit more targets. The University of Michigan has more than 90 "diversity" programs,

    http://www.diversity.umich.edu/programs/
    Gentle Reader is exhorted to find even one U of M Gifted program.

    Psychology is economics' "HETEROSKEDASTICITY!" without economics' accountability. The Harvard Department of Psychology reached its apotheosis with 64 Homer Street, Newton, MA 02458.

    Mediocrity is a vice of the doomed.

  • Madhu says:

    Hi Melodye,

    I just wanted to let you know that I have included this post in the latest Scientia Pro Publica carnival over on my blog. Do drop by when you have a moment.

    cheers,
    Madhu

  • wmf says:

    "It must have been egregious". Egregious. You must be joking. Give Hauser a break.

    Egregious is threatening your secretary with physical harm and making her order your underpants and pay your personal income taxes. Egregious is sending every peer review request received in your mailbox to junior research staff (that don't even have an M.S.) and then signing your name to them, thus preventing the vast number of important papers from being published because the actual unnamed reviewer probably hadn't even taken an epidemiology course yet. Egregious is saying that half the human population is incapable of basic quantitative methods.

    I hope Hauser goes on. And gets more famous. Better yet, I hope he goes on Saturday Night Live. And takes a puppy with him.

    Chin up Hauser. There are a lot of people who are withholding judgment.

  • Juan says:

    I think that this affair calls for a revolution. Science has been corrupted by bureaucracy ever since bureaucrats started influencing academia with impact factors, h-indices, and all the paraphernalia of factors and quotients used to measure the importance of research, with the ensuing consequences for contracts and grants. I am not saying that this instructs researchers to do all the dirty tricks you denounce; it most probably selects for them.
    We should rise and attack; who are those politicians, bureaucrats, and administrators to tell us how to do science? Where are their credentials? If you want science, let us do it respecting our norms. I like Markman's statement; if we are going to be saved, it will be by science and the replication ideal, not by bureaucrats. So get up, stand up.

  • James Contos says:

    Dear Melodye,
    Why so hush-hush about revealing names? I'm preparing a book about irreproducible results, fraud, and the consequences for various parties - based primarily on my experience of five years of postdoc work gone awry because all my experiments were based on falsified research (add to that more suffering from whistleblowing):

    http://www.nytimes.com/2008/03/07/science/07retractw.html
    http://www.nytimes.com/2010/09/24/science/24retraction.html

    Sounds like you have some good examples that would be instructive for others to know about. You could mention names but still remain anonymous. I invite you to contribute to my book "Nobel Fraud". I could use some good examples from psychology.
    Rocky (rocky@sierrarios.org)
