I don’t think there’s a replication crisis

Mar 30 2012

Everyone seems to be abuzz with the crisis in replicability, which feels like a new thing. I don't think it's new or a crisis. In my small corners of the research world, problems with replication have been common since, let's say, the mid-1980s, and the field is still doing fine.

When I became interested in the replication problem, ten years ago, it was not a theoretical issue but a practical one. Two close friends of mine from grad school left and began post-docs, brimming with hope and optimism. They first took on the chore of extending their new labs' recent Science/Nature-worthy discoveries, and then proceeded to waste years unable to replicate even the original results. Who wants to waste even one year of their life on anything? When I was a youngish grad student, these were my ghost stories, fascinating and horrible. I began to collect them in my head, and there were lots of them. At one point I decided that failure to replicate motivating studies was the leading cause of post-doc burnout. I began to trace out a map of what was true and untrue in my field: which labs were publishing solid stuff, and which labs were producing flashy soft stuff.

I decided that trusting a paper just because it was published was naive. I learned that few people will ever give an unbiased opinion of their own paper. I decided that, as a consumer of scientific data, it was my responsibility to judge for myself. I developed a credo: "If it's in Science or Nature, it's probably wrong." I did not take publication as proof of correctness. This skepticism had practical benefits: when I started my post-doc, I was asked to follow up on a project that seemed slightly fishy, so I passed on it. [I still don't know for sure whether my hunch was right, but I think it was.]

When I tell people this, many don't want to hear it. I think a lot of people are attracted to science by the promise of hard, solid truth. I think some of those people would be happier teaching the stuff we understand well, rather than on the front lines doing research, dealing with erroneous crap all the time. Other people, when I tell them that I don't trust a lot of what is published, act like I just told them the sky is blue. I think one of my biggest responsibilities as a PI is to encourage my students to doubt everything they read, including papers I wrote.

How can you be expected to know if data is legit? It's not easy, but I think that gaining this sense is one of the most important skills a scientist can cultivate: not just reading papers, but critically evaluating them. It's what a good journal club teaches you. This is also one place where networking is really helpful, especially for young scientists, who have more to lose by making the wrong decisions. A lot of people aren't shy about sharing papers they hate! And if you learn to read between the lines of a paper, you can often see where it's calling bullshit on other ones, and then you can judge for yourself. I love the review journals, especially when the authors are subtly catty. [Sometimes, not so subtly.] With practice, you develop a spider sense, especially when you have experience with the method and have collected that kind of data yourself.

I'm not sure it's a crisis. It certainly doesn't only apply to Stapel/Hauser-style psychology, and it certainly goes back at least to the mid-1980s, the period when the papers I carefully read in grad school started to come out. I think bad results are just what happens when you are out on the front lines, not a symptom of some gigantic crisis that's about to explode and kill the whole scientific method. These things generally get caught as the field advances, inch by inch, over the years. People doing good work eventually get rewarded, but sloppy people often get away with bad work. Sometimes even fraudsters do. That's just a sucky fact of life.

For the record, I think the reason people are so worked up over cognitive psychology is twofold: (1) the normal process of advancement doesn't shake out bad results on its own very quickly, and (2) the findings are remarkably non-robust to small deviations in method. As a consequence, it's hard for practitioners to spot bad data with the spider-sense test, the loop doesn't close very quickly, and things get stuck out there and believed for years and years. I have no opinion on whether Bargh is right or not, but I do find it worrisome that he is claiming tiny deviations in method are so critical. As for Hauser and Stapel, I wonder if these factors make it easier for people to do large amounts of fraudulent work without getting caught. As for the neuroimagers and the geneticists, I think those fields are in their adolescence and are actively creating new and better standards. And as for the psychologists, I am incredibly optimistic about the PsychFileDrawer, and I hope it makes truth win out a lot faster. I'm looking forward to the tide going out over the next ten years, so we can see who has been secretly swimming naked.
