Today's post is the third in a series on the politics of ideas, and examines the current political climate of psychology and cognitive science. In earlier posts, I discussed how certain crooked editorial practices can effectively subvert the review process, and how lack of transparency in review breeds precisely the kind of culture that anonymous review was designed to undermine. Today, I address the question of why these problems exist in the first place, and explore how changing the culture of review may also change the culture of the field -- for the better. Enclosed are tales of incest, money laundering, and epicycles. Count yourself forewarned. If you would like to skip straight to my practical suggestions, these can be found in the second-to-last section: "Changing the Culture through the Journals."
An Incestuous Lineage: The Politics of Continuity
In psychology, many researchers belong to fairly specific subfields. For example, a researcher may be a particular brand of mathematical modeler or may work on a highly specific question in childhood development. This means that a scientist’s work is frequently evaluated by other people in her camp, who may be highly receptive to her general approach and who likely have a vested interest in seeing her and her friends’ and her collaborators’ work published. This is not particularly surprising (or even unethical), though the consequences are not ideal, as we will see in a moment.
In any case, if you are outside the field looking in, there are two quick and dirty ways you can identify whether a researcher belongs to a particular camp, and if so, which one. First, you look at who they cite. In psychology, there are certain entrenched camps that will only cite each other with any regularity, and will, as a matter of course, not cite researchers outside their group, even when they conduct research on the same topics. This is a clever means of ensuring that scientists with similar ideas rise to the top, while competitors stay down. Since citation rates are often taken as a measure of merit and impact, well-cited papers will then become even better-cited papers. It’s power by numbers!
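To make the "camp" idea concrete, here is a toy sketch in plain Python (all author names and citation lists are invented for illustration; real camp detection would require actual bibliometric data and more serious community-detection methods). The idea: treat mutual citation as a tie, group authors into connected components of the mutual-citation graph, and then ask what fraction of each author's citations stay in-group.

```python
from collections import defaultdict

# Toy who-cites-whom lists (all names invented for illustration).
citations = {
    "alice": ["bob", "carol", "bob"],
    "bob":   ["alice", "carol"],
    "carol": ["alice", "bob"],
    "dave":  ["erin"],
    "erin":  ["dave", "alice"],  # one lone cross-camp citation
}

def find_camps(citations):
    """Group authors into 'camps': connected components of the
    mutual-citation graph (an edge exists if A cites B and B cites A)."""
    mutual = defaultdict(set)
    for a, cited in citations.items():
        for b in set(cited):
            if a in citations.get(b, []):
                mutual[a].add(b)
                mutual[b].add(a)
    seen, camps = set(), []
    for a in citations:
        if a in seen:
            continue
        stack, camp = [a], set()
        while stack:  # depth-first walk of the mutual-citation graph
            x = stack.pop()
            if x in camp:
                continue
            camp.add(x)
            stack.extend(mutual[x] - camp)
        seen |= camp
        camps.append(camp)
    return camps

def insularity(author, citations, camps):
    """Fraction of an author's citations that stay inside their own camp."""
    camp = next(c for c in camps if author in c)
    cited = citations[author]
    return sum(1 for b in cited if b in camp) / len(cited)

camps = find_camps(citations)
# Here alice cites entirely in-group (insularity 1.0),
# while erin's lone cross-camp citation drops hers to 0.5.
```

On this toy data, the alice/bob/carol triad falls out as one camp and dave/erin as another; an insularity near 1.0 across a whole cluster is the pattern the paragraph above describes.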
Second, you look up the researcher’s relationships on a Neurotree schematic, which shows you who their advisors and advisees were, and who they went to grad school with. With many cognitive scientists, there are surprisingly close parallels between their citation records and their ‘ancestral’ relationships (both top-down and across the page). I say surprising, because while you might expect a certain degree of continuity, the degree you find – particularly among researchers at elite institutions – is almost astonishing.
To be fair, continuity has its rightful place in science. It seems important, for example, that past work not be forgotten, lest we be forced to 'rediscover' it. Moreover, in many disciplines in which the theoretical foundations have long since been established, continuity in research is practically a given.
However, the study of the mind represents a fundamentally different enterprise, because the theoretical and philosophical underpinnings of the field are built on shifting sands. Many of the 'big questions' about language, memory, consciousness, and so on, have yet to be decided. Indeed, there are still ongoing logical and empirical debates over how to even begin to study these topics.
Yet, this is almost certainly not the impression you would get if you were to conduct a sociological examination of the elite of academic psychology. To the contrary: the relationships established between senior colleagues and their youthful protégés can appear downright incestuous. All too frequently, the research questions of one generation are handed straight down to the next, without criticism or comment. The younger researchers surely have new methods up their sleeves, and maybe new math, but they're humming along to the strains of an all too familiar tune. And this happens even when it's clear that "the same old song" isn't solving anyone's problems anymore.
This is because, among the elite, fear of change -- real change -- is rampant.
And not without good reason. The life of every good scientist is, I think, rooted in the belief that the empirical process is a means of profound discovery; of somehow grasping onto something 'real,' that has never before been observed. It is a path through which we may generate meaning in our lives, and a deeper knowledge into some aspect of the world, which we then share. In this, science can be both a personal and a civic endeavor.
It must be profoundly jarring, then, to consider the possibility -- after a lifetime of such work -- that one's best years were spent idly chasing after illusions. How does one come to terms with the idea that a life devoted to science was, in some sense, wasted on rotted ideas? For the elders of a scientific community to christen a sea change, they must turn their backs on not only their work, but themselves; they must renounce those lost years spent wandering in the desert.
But this is not what anyone does. Indeed, the impulse is naturally quite the opposite; it is to stem the tides of change and to keep watchful guard over a cherished legacy. That people act on this impulse is not so much dishonest or corrupt as blind. Belief in the meaning and importance of one's work can take on a quality not unlike religious fervor, which goes far beyond what is 'rational.'
And so this is something like what happens: The elder scientists stay close at the heels of the young, and handpick the bright new stars and talents that best embody their visions of the future (--which, by necessity, are visions of the past). These chosen few are then shepherded through the hiring and tenure processes at the top schools, and cited and quoted by their guardians in all the best journals, and nominated for the most prestigious awards (--of the kind that not just anyone can apply for, of course). And in this way, the senior scientists keep a steady grip on the coming age, which holds the bright promise of their chosen descendants, who will hold the reins over the next generation.
Which feeds directly into the prickly issue of publication, which is in large part dominated by these handpicked darlings for reasons too various and sundry to easily summarize... But consider -- they are, of course, talented and prolific; they are backed by powerful institutions; they are the chosen heirs of large theoretical camps from which they draw favorable reviewers, editors, and citations; they know all the most powerful people in the field -- who know all the editors -- and so on.
This is why, in a field that should be searching for a science, everyone's still asking the same old questions.
Reading this, perhaps you feel a shock of recognition, as if you've heard this story before. Of course you have... This is the story of most every human enterprise that deals in power. It is the story of military histories and corporate bureaucracies; the story of modern day political systems and secret societies that time has long since forgot. Perhaps it is part of the larger story of human nature (or perhaps it is merely a product of the strange circumstances we find ourselves in). But familiar or no, there is something profoundly unsettling here. Because this story belongs to science.
And it seems somehow -- though perhaps you can't yet put your finger on why -- that politics, as such, has no place in science.
I will tell you a story that nails down just why:
Observing the sordid machinations and inner dealings of the academic world, I have often been reminded of a conversation I once had with a wealthy stranger, an up-and-coming gallerist in the art world. Modern art, he explained, was in fact a carefully constructed and socially acceptable form of money laundering. To counter my obvious incredulity, he asked: “What has less objective value than a piece of art?” And then, when I raised an eyebrow: “Imagine I throw a smattering of paint on a canvas. Who are you to say what that’s worth?”
The gloss of his thesis was that art is a form of investment rife with the worst sort of insider trading. The art world 'elite' buy art that they expect will increase in value. But how to ensure the work appreciates? Simple: by making the artist a star. Once the elite has anointed the artist as the next-big-thing, they buy up all his work. And in so doing, the artist becomes the next-big-thing, and a painting that might have sold for hundreds on Tuesday is selling for hundreds of thousands by Friday. The elite, and their investors, stand to profit handsomely.
I found this hard to believe and asked how this could possibly fit with what I knew about art criticism. “You mean art bollocks?” was the gallerist's snide response.
While we might reasonably expect more objectivity in the practice of science, there is an unfortunate parallel to be drawn here. If we think of citation rates like sales, insider-citing and -reviewing start looking like insider-trading far too quickly.
Perhaps you have yet to see the parallels, so let me push back a step:
In science, an idea isn't worth anything until we collectively make it so. Until we cite it, and repeat it to our friends, and use it to frame the way we see the world and the way we design our experiments, why -- until then, it's simply a couple of lines on a piece of paper. It has no social or intellectual currency.
But science isn't like art in one very important sense: the scientific process is founded on the idea that we can measure and evaluate our ideas empirically; that is, in some sense objectively. There is a degree to which we can be 'right' or 'wrong' about something. We can pick the right statistical analyses to understand our data or the wrong ones; we can design rigorous methods for our experiments or sloppy ones; we can, it turns out, buy into theories that don't match our data very well, despite our most fervent hopes.
This is why the review process is so unbelievably important to science, in general, and to young fields like psychology, in particular. Because if research gets through the process that shouldn't -- or if, conversely, research doesn't make it through the process that should -- then we end up collectively buying into the wrong ideas. We end up writing the wrong stories and designing the wrong experiments. And while this might appear at first as something like progress, in fact, it's just the opposite. We're dreaming up epicycles all over again.
Given that psychology has yet to establish a firm tradition of inquiry, it is critical that we discover -- empirically -- what the best theoretical modes and investigative approaches are in grappling with the study of mind. This cannot be resolved by fiat, and it should not be decided by politics or popular theories, which may well turn out to be wrong.
No: what is required is a culture of honest and competent reviewing, which would allow for the dissemination of research on the basis of scientific rigor and advance.
But such a culture does not exist in psychology, and it cannot, so long as we maintain a system of anonymous peer review. The system was put in place to protect us from the spite and vindictiveness of individuals, but it has become a system that perpetuates and intensifies our politics, stratifies our ranks, and allows us to forget the humanness of each other, and the joint purpose that we, as scientists, share.
Changing the Culture through the Journals
The question is -- what is to be done? In a political system, the notion that the powerful will do what they can to maintain their power is hardly surprising. To the contrary, it is quite rational behavior. The only way to work around it is either to change human nature -- unlikely! -- or to change the nature of the system. As Stanley Milgram and others have observed, humans are creatures of context. Change the context and you can change the behavior, for better or worse.
The review process is one such ‘context’ that I believe is ripe for change. As I mentioned earlier, scientists in established camps are often reviewed by ‘friendly’ in-group peers. This means that so long as they already belong to a camp, their work is far more likely to face an uncritical and fairly easy review process, and to accumulate more citations post-publication. All the better for them.
However, if you do not belong to such a group; if your ideas go against the grain; or if you try to publish work that is cross-discipline, you may run into any number of problems. For illustration, I'll cite some of the usual suspects from my own experience:
1) Competence. If your experimental and theoretical work bridges a number of subfields, you may find yourself assigned reviewers who simply do not have the background or sophistication to review your work on its technical merits. Having gotten reviewers at top-flight journals who couldn’t grasp the use of d-prime, and didn’t understand the basics of learning theory, I’ve joked that we might as well have comp-lit theorists reviewing our work. (“I know I could try to understand the math, but textual analysis is so in vogue these days…!”)
2) Politicking. If your theoretical work is threatening or offensive to any of the camps, and you aren’t in a camp yourself, you are almost guaranteed to attract the occasional vicious attack in review, particularly at high-level journals. This is certainly true of the lab I work in, and I know that we're not the only ones. To be fair, our experience may be particularly bad: as of this year, all but one of the papers I've overseen or co-authored has met with a bimodal distribution in review: either the reviewers love it, and strongly urge publication, or they hate it, and go on long tears about why it’s irrelevant or not 'in keeping with the literature' at large. This is almost shockingly political. The most aggressive responses that I've seen in review are invariably from scientists who are theoretically 'opposed' to our work (always delightful) or from scientists who, uh, appear to hate math, and want to keep psychology model-free to the extent possible. Unfortunately, in addition to this kind of hostility, I've also encountered serious dishonesty in review on several occasions, in which the reviewer blatantly mischaracterized the model, the results, or the relation of the literature to our work (and sometimes all three).
Both politicking and lack of competence can lead to significant delays in the review process and to unfairly time-consuming revisions and defenses that can take years. At some journals, I’ve encountered hostile reviewers who will make an extensive set of revision requests in the first round, only to make another, completely different set of requests in the second and third. (The process is, in its gentle way, a Sisyphean nightmare; it reminds me of my high school art teacher who, unhappy with one of my oils, asked that I repaint the color scheme in blues and grays. A week later, after I had spent many a sleepless night at my easel, I presented her with the finished canvas. “Did I say blue and gray?” she asked, tilting her head. “Because what I meant was green and black…”)
Situations like this would not arise if journals required that reviewers openly identify themselves, their specialty, and their competence to review the paper, and if, in addition, the journal made the paper’s reviews and responses available online for general inspection and comment throughout the process. This would have several obvious and immediate benefits:
First, it would effectively stop reviewers from employing dishonest argumentative strategies, and from trying to tear apart work that they weren't equipped to engage with in the first place. Removing anonymity would remove much of the underhandedness from the system, and opening up review to public comment would help catch and correct both author and reviewer error. And it would mean that whether you were 'in-group' or not, your work would be subjected to similarly stringent standards.
Second, it would empower authors to respond to the strengths and weaknesses of their reviewers, so that they could give more considered replies to the reviews they received. If a reviewer were clearly too biased, too lazy, or too incompetent to fairly adjudicate the work, the author would then be able to use this as a line of defense or appeal (even if the editor ultimately decided otherwise). This would in turn put pressure on editors to select the fairest cross-section of reviewers possible -- which, as I mentioned in an earlier post -- is not always done.
Finally, by opening up pre-publication papers and reviews to public commentary, and by archiving these materials, journals could create an important, usable record of review. Researchers could then cite and share the arguments, tutorials, statistical critiques, or literature suggestions made in past reviews when writing subsequent ones. It would be much like MIT OpenCourseWare, except aimed at the review process, and it would allow authors and reviewers to use established precedent to strengthen their papers and critiques. This could lead to more universal standards in review.
In addition to these benefits -- which are considerable -- transparency would make the review process faster and more efficient for all papers. There are several reasons for this, the most obvious being that if reviews and editorial decisions were dated, it would apply significant pressure on reviewers and editors to stick fast to deadlines. What's even more appealing, however, is the prospect that it might lessen the workload of reviewers, editors, and authors alike.
Here's how: if every journal had access to prior reviews and editorial decisions on a given paper, making decisions on how to proceed with that paper would become much easier. For instance, imagine that a technically sound paper that had already secured several reviews was facing rejection at a top-tier journal for failing to establish broad interest. Specialty journals could then contact the authors with immediate offers of publication, or suggest that they would happily publish the paper if X, Y and Z were addressed. Conversely, a journal might decide off the bat that a paper that had already been shown to suffer from serious statistical or methodological errors would not survive another review process, and could reject the paper upfront. Within this space, there would be ample middle ground: for example, an author might make a strong case for what had gone wrong in a previous review process, or demonstrate how new revisions to the paper suitably fixed old problems raised in review -- editors could then respond as they saw fit. Ultimately, all this would mean that papers would be seen by fewer reviewers, and editors and authors would expend much less time on the review process, and could better concentrate their efforts on their respective research programs.
In search of a science
Think back, for a moment, to the appalling theater of Zimbardo's prison experiment (and the Milgram experiments). We exist in a political and, at times, ideologically charged academic culture. Given this state of affairs, it is hardly surprising that we have a broken review system. Certainly, the system can work for some scientists and some papers; there is certainly no deficit of ethical reviewers and industrious editors. But at the same time, there are reviewers who are lazy, aggressive, dishonest and incompetent, and there are editors who are, quite frankly, corrupt. And in that gray middle ground, there are many of us who perhaps could be doing better. Who perhaps wouldn't be so ideologically heated, if we didn't think it was expected or necessary, or if we knew the world were watching. Who might confess that we really didn't know enough math to assess that model, but tried our best, and didn't want to overreach in our conclusions. Who might try harder to understand and assess the paper if we expected that other people would be commenting on our analysis.
Opening up review to transparency makes it a 'graded' process. It introduces scientific and reviewer accountability into the system, and allows for the creation of higher standards and agreed upon precedents. It also means that papers can be continually subject to re-analysis and review as our science progresses. And as much as possible, it takes the politics out of the science.
In sum: If the publication process were more of a merit-based, democratic enterprise -- one that rewarded its authors for their intellectual rigor and not their institutional or theoretical affiliations -- psychology might begin to look something like a science, instead of a field desperately in search of a science. Or so we should hope...
[In tomorrow’s post, I will relate a series of unfortunate events, and go into somewhat greater detail about the standard arguments made against opening up the review process (and why they’re all ‘bollocks,’ quite frankly). For now, dear readers: life is but a dream! Don’t rock the boat?]
 When scientists go out of their way not to cite someone doing work in a similar area, this can (and does) result in subtle forms of plagiarism. Citations are the lifeblood of a scientist, and if a paper by an in-group kid ‘borrows’ an idea from an out-group kid without citation, that idea will forever be remembered and attributed to the popular one. It's happened to me. It's painful.
 From what I can gather, this does not hold in other scientific fields, such as chemistry or physics. Correct me if I'm wrong --
 A young philosopher who studies the history of science once told me that researchers tend not to cite anything older than fifteen years, and that the discoveries of the fairly recent past are often ‘rediscovered’ every couple decades. Charming hearsay, admittedly.
 This will, of course, be seen as something of a caricature. Rightly so. But undoubtedly, to write on something as complex as this is (invariably) to make a caricature of it.
 Once we've bought into the bad ideas, we become invested in them. They are no longer simply the currency of the field; they are our currency.
 The standard response goes something like, “Why did you have to introduce math into this? I hate math! That’s why I studied psychology!! Like, I didn’t even get how your model WORKED.” (…I wish I were joking. Put a psychologist in a corner, and s/he starts sounding like a valley girl so fast)