Eyes Wide Shut: A Field in Search of a Science

Aug 25, 2010 · Published under Forget What You've Read!

Today's post is the third in a series on the politics of ideas, and examines the current political climate of psychology and cognitive science.  In earlier posts, I discussed how certain crooked editorial practices can effectively subvert the review process, and how lack of transparency in review breeds precisely the kind of culture that anonymous review was designed to undermine.  Today, I address the question of why these problems exist in the first place, and explore how changing the culture of review may also change the culture of the field -- for the better.  Enclosed are tales of incest, money laundering, and epicycles.  Count yourself forewarned.  If you would like to skip straight to my practical suggestions, these can be found in the second-to-last section: "Changing the Culture through the Journals."

An Incestuous Lineage: The Politics of Continuity

In psychology, many researchers belong to fairly specific subfields.  For example, a researcher may be a particular brand of mathematical modeler or may work on a highly specific question in childhood development.  This means that a scientist’s work is frequently evaluated by other people in her camp, who may be highly receptive to her general approach and who likely have a vested interest in seeing her and her friends’ and her collaborators’ work published.  This is not particularly surprising (or even unethical), though the consequences are not ideal, as we will see in a moment.

In any case, if you are outside the field looking in, there are two quick and dirty ways you can identify whether a researcher belongs to a particular camp, and if so, which one. First, you look at who they cite.  In psychology, there are certain entrenched camps that will only cite each other with any regularity, and will, as a matter of course, not cite researchers outside their group, even when they conduct research on the same topics [1].  This is a clever means of ensuring that scientists with similar ideas rise to the top, while competitors stay down. Since citation rates are often taken as a measure of merit and impact, well-cited papers will then become even better cited papers.  It’s power by numbers!

Second, you look up the researcher's relationships on a Neurotree schematic, which shows you who their advisors and advisees were, and who they went to grad school with.  With many cognitive scientists, there are surprisingly close parallels between their citation records and their 'ancestral' relationships (both top-down and across the page).  I say surprising, because while you might expect a certain degree of continuity, the degree you find – particularly among researchers at elite institutions – is almost astonishing [2].

To be fair, continuity has its rightful place in science.  It seems important, for example, that past work not be forgotten, lest we be forced to 'rediscover' it [3].  Moreover, in many disciplines in which the theoretical foundations have long since been established, continuity in research is practically a given.

However, the study of the mind represents a fundamentally different enterprise, because the theoretical and philosophical underpinnings of the field are built on shifting sands.  Many of the 'big questions' about language, memory, consciousness, and so on, have yet to be decided.  Indeed, there are still ongoing logical and empirical debates over how to even begin to study these topics.

Yet, this is almost certainly not the impression you would get if you were to conduct a sociological examination of the elite of academic psychology.  To the contrary: the relationships established between senior colleagues and their youthful protégés can appear downright incestuous.  As happens quite frequently, the research questions of one generation are unloaded straight on down to the next, without criticism or comment.  Surely the young camp of researchers has new methods up their sleeves, and maybe new math, but they're humming along to the strains of an all too familiar tune.  And this happens even when it's clear that "the same old song" isn't solving anyone's problems anymore.

This is because, among the elite, fear of change -- real change -- is rampant.

And not without good reason.  The life of every good scientist is, I think, rooted in the belief that the empirical process is a means of profound discovery; of somehow grasping onto something 'real,' that has never before been observed.  It is a path through which we may generate meaning in our lives, and a deeper knowledge into some aspect of the world, which we then share.  In this, science can be both a personal and a civic endeavor.

It must be profoundly jarring, then, to consider the possibility -- after a lifetime of such work -- that one's best years were spent idly chasing after illusions.  How does one come to terms with the idea that a life devoted to science was, in some sense, wasted on rotted ideas?  For the elders of a scientific community to christen a sea change, they must turn their backs on not only their work, but themselves; they must renounce those lost years spent wandering in the desert.

But this is not what anyone does.  Indeed, the impulse is naturally quite the opposite; it is to stem the tides of change and to keep watchful guard over a cherished legacy.  That people act on this impulse is not so much dishonest or corrupt as blind. Belief in the meaning and importance of one's work can take on a quality not unlike religious fervor, which goes far beyond what is 'rational.'

And so this is something like what happens: The elder scientists stay close at the heels of the young, and handpick the bright new stars and talents that best embody their visions of the future (--which, by necessity, are visions of the past).  These chosen few are then shepherded through the hiring and tenure processes at the top schools, and cited and quoted by their guardians in all the best journals, and nominated for the most prestigious awards (--of the kind that not just anyone can apply for, of course).  And in this way, the senior scientists keep a steady grip on the coming age, which holds the bright and steady promise of their chosen descendants, who will hold the reins over the next generation.

Which feeds directly into the prickly issue of publication, which is in large part dominated by these handpicked darlings for reasons too various and sundry to easily summarize... But consider -- they are, of course, talented and prolific; they are backed by powerful institutions; they are the chosen heirs of large theoretical camps from which they draw favorable reviewers, editors, and citations; they know all the most powerful people in the field -- who know all the editors -- and so on [4].

This is why, in a field that should be searching for a science, everyone's still asking the same old questions.

Reading this, perhaps you feel a shock of recognition, as if you've heard this story before.  Of course you have... This is the story of most every human enterprise that deals in power.  It is the story of military histories and corporate bureaucracies; the story of modern day political systems and secret societies that time has long since forgot.  Perhaps it is part of the larger story of human nature (or perhaps it is merely a product of the strange circumstances we find ourselves in).  But familiar or no, there is something profoundly unsettling here.  Because this story belongs to science.

And it seems somehow -- though perhaps you can't yet put your finger on why -- that politics, as such, has no place in science.

The Gallerist

I will tell you a story that nails down just why:

Observing the sordid machinations and inner-dealings of the academic world, I have often been reminded of a conversation I once had with a wealthy stranger, an up-and-coming gallerist in the art world.  Modern art, he explained, was in fact a carefully constructed and socially acceptable form of money laundering.  To counter my obvious incredulity, he asked: “What has less objective value than a piece of art?” And then, when I raised an eyebrow: “Imagine I throw a smattering of paint on a canvas.  Who are you to say what that’s worth?”

The gloss of his thesis was that art is a form of investment riddled with the worst sort of insider-trading.  The art world 'elite' buy art that they expect will increase in value.  But how to ensure the work appreciates?  Simple: by making the artist a star.  Once the elite has anointed the artist as the next-big-thing, they buy up all his work.  And in so doing, the artist becomes the next-big-thing, and a painting that might have sold for hundreds on Tuesday is selling for hundreds of thousands by Friday.  The elite, and their investors, stand to profit handsomely.

I found this hard to believe and asked how this could possibly fit with what I knew about art criticism.  “You mean art bollocks?” was the gallerist's snide response.

While we might reasonably expect more objectivity in the practice of science, there is an unfortunate parallel to be drawn here.  If we think of citation rates like sales, insider-citing and -reviewing start looking like insider-trading far too quickly.

Perhaps you have yet to see the parallels, so let me push back a step:

In science, an idea isn't worth anything until we collectively make it so.  Until we cite it, and repeat it to our friends, and use it to frame the way we see the world and the way we design our experiments, why -- until then, it's simply a couple of lines on a piece of paper.  It has no social or intellectual currency.

But science isn't like art in one very important sense: the scientific process is founded on the idea that we can measure and evaluate our ideas empirically; that is, in some sense, objectively.  There is a degree to which we can be 'right' or 'wrong' about something.  We can pick the right statistical analyses to understand our data or the wrong ones; we can design rigorous methods for our experiments or sloppy ones; we can, it turns out, buy into theories that don't match our data very well, despite our most fervent hopes.

This is why the review process is so unbelievably important to science, in general, and to young fields like psychology, in particular.  Because if research gets through the process that shouldn't -- or if, conversely, research doesn't make it through the process that should -- then we end up collectively buying into the wrong ideas.  We end up writing the wrong stories and designing the wrong experiments.  And while this might appear at first as something like progress, in fact, it's just the opposite.  We're dreaming up epicycles all over again [5].

Given that psychology has yet to establish a firm tradition of inquiry, it is critical that we discover -- empirically -- what the best theoretical modes and investigative approaches are in grappling with the study of mind.  This cannot be resolved by fiat, and it should not be decided by politics or popular theories, which may well turn out to be wrong.

No: what is required is a culture of honest and competent reviewing, which would allow for the dissemination of research on the basis of scientific rigor and advance.

But such a culture does not exist in psychology, and it cannot, so long as we maintain a system of anonymous peer review.  The system was put in place to protect us from the spite and vindictiveness of individuals, but it has become a system that perpetuates and intensifies our politics, stratifies our ranks, and allows us to forget the humanness of each other, and the joint purpose that we, as scientists, share.

Changing the Culture through the Journals

The question is -- what is to be done?  In a political system, the notion that the powerful will do what they can to maintain their power is hardly surprising.  To the contrary, it is quite rational behavior.  The only way to work around it is either to change human nature (unlikely!) or to change the nature of the system.  As Stanley Milgram and others have observed, humans are creatures of context.  Change the context and you can change the behavior, for better or worse.

The review process is one such ‘context’ that I believe is ripe for change.  As I mentioned earlier, scientists in established camps are often reviewed by ‘friendly’ in-group peers.  This means that so long as they already belong to a camp, their work is far more likely to face an uncritical and fairly easy review process, and to accumulate more citations post-publication.  All the better for them.

However, if you do not belong to such a group; if your ideas go against the grain; or if you try to publish work that is cross-disciplinary, you may run into any number of problems. For illustration, I'll cite some of the usual suspects from my own experience:

1) Competence. If your experimental and theoretical work bridges a number of subfields, you may find yourself assigned reviewers who simply do not have the background or sophistication to review your work on its technical merits.  Having gotten reviewers at top-flight journals who couldn’t grasp the use of d-prime, and didn’t understand the basics of learning theory, I’ve joked that we might as well have comp-lit theorists reviewing our work.  (“I know I could try to understand the math, but textual analysis is so in vogue these days…!”)

2) Politicking.  If your theoretical work is threatening or offensive to any of the camps, and you aren’t in a camp yourself, you are almost guaranteed to attract the occasional vicious attack in review, particularly at high-level journals.  This is certainly true of the lab I work in, and I know we're not the only ones.  To be fair, our experience may be particularly bad: as of this year, all but one of the papers I've overseen or co-authored have gotten a bimodal distribution in review: either the reviewers love it, and strongly urge publication, or they hate it, and go on long tirades about why it’s irrelevant or not 'in keeping with the literature' at large.  This is almost shockingly political.  The most aggressive responses that I've seen in review are invariably from scientists who are theoretically 'opposed' to our work (always delightful) or from scientists who, uh, appear to hate math, and want to keep psychology model-free to the extent possible [6].  Unfortunately, in addition to this kind of hostility, I've also encountered serious dishonesty in review on several occasions, in which the reviewer blatantly mischaracterized the model, the results, or the relation of the literature to our work (and sometimes all three).

Both politicking and lack of competence can lead to significant delays in the review process and to unfairly time-consuming revisions and defenses that can take years.  At some journals, I’ve encountered hostile reviewers who will make an extensive set of revision requests in the first round, only to make another, completely different set of requests in the second and third.  (The process is, in its gentle way, a Sisyphean nightmare; it reminds me of a high school art teacher who, unhappy with one of my oils, asked that I repaint the color scheme in blues and grays.  A week later, after I had spent many a sleepless night at my easel, I presented her with the finished canvas.  “Did I say blue and gray?” she asked, tilting her head.  “Because what I meant was green and black…”)

Situations like this would not arise if journals required that reviewers openly identify themselves, their specialty, and their competence to review the paper, and if, in addition, the journal made the paper’s reviews and responses available online for general inspection and comment throughout the process.  This would have several obvious and immediate benefits:

First, it would effectively stop reviewers from employing dishonest argumentative strategies, and from trying to tear apart work that they weren't equipped to engage with in the first place.  Removing anonymity would remove much of the underhandedness from the system, and opening up review to public comment would help catch and correct both author and reviewer error.  And it would mean that whether you were 'in-group' or not, your work would be subjected to similarly stringent standards.

Second, it would empower authors to respond to the strengths and weaknesses of their reviewers, so that they could give more considered replies to the reviews they received.  If a reviewer were clearly too biased, too lazy, or too incompetent to fairly adjudicate the work, the author would then be able to use this as a line of defense or appeal (even if the editor ultimately decided otherwise).  This would in turn put pressure on editors to select the fairest cross-section of reviewers possible -- which, as I mentioned in an earlier post -- is not always done.

Finally, by opening up pre-publication papers and reviews to public commentary, and by archiving these materials, journals could create an important, usable record of review.  Researchers could then cite and share various arguments, tutorials, statistical critiques or literature suggestions made in past reviews in subsequent ones.  It would be much like MIT OpenCourseWare, except aimed at the review process, and it would allow authors and reviewers to use established precedent to strengthen their papers and critiques.  This could lead to more universal standards in review.

In addition to these benefits -- which are considerable -- transparency would make the review process faster and more efficient for all papers.  There are several reasons for this, the most obvious being that if reviews and editorial decisions were dated, it would apply significant pressure on reviewers and editors to stick fast to deadlines.  What's even more appealing, however, is the prospect that it might lessen the workload of reviewers, editors, and authors alike.

Here's how: if every journal had access to prior reviews and editorial decisions on a given paper, making decisions on how to proceed with that paper would become much easier.  For instance, imagine that a technically-sound paper that had already secured several reviews was facing rejection at a top-tier journal for failing to establish broad interest.  Specialty journals could then contact the authors with immediate offers of publication, or suggest that they would happily publish the paper if X, Y and Z were addressed.  Conversely, a journal might decide off the bat that a paper that had already been shown to suffer from serious statistical or methodological errors would not survive another review process, and could reject the paper upfront.  Within this space, there would be ample middle ground: for example, an author might make a strong case for what had gone wrong in a previous review process, or demonstrate how new revisions to the paper suitably fixed old problems raised in review -- editors could then respond as they saw fit.  Ultimately, what all this would mean is that papers would be seen by fewer reviewers, and editors and authors would expend much less time on the review process, and could better concentrate their efforts on their respective research programs.

In search of a science

Think back, for a moment, to the appalling theater of Zimbardo's prison experiment (and the Milgram experiments). We exist in a political and, at times, ideologically charged academic culture.  Given this state of affairs, it is hardly surprising that we have a broken review system.  Certainly, the system can work for some scientists and some papers; there is certainly no deficit of ethical reviewers and industrious editors.  But at the same time, there are reviewers who are lazy, aggressive, dishonest and incompetent, and there are editors who are, quite frankly, corrupt.  And in that gray middle ground, there are many of us who perhaps could be doing better.  Who perhaps wouldn't be so ideologically heated, if we didn't think it was expected or necessary, or if we knew the world were watching.  Who might confess that we really didn't know enough math to assess that model, but tried our best, and didn't want to overreach in our conclusions.  Who might try harder to understand and assess the paper if we expected that other people would be commenting on our analysis.

Opening up review to transparency makes it a 'graded' process.  It introduces scientific and reviewer accountability into the system, and allows for the creation of higher standards and agreed upon precedents.  It also means that papers can be continually subject to re-analysis and review as our science progresses.  And as much as possible, it takes the politics out of the science.

In sum:  If the publication process were more of a merit-based, democratic enterprise -- which rewarded its authors for their intellectual rigor and not their institutional or theoretical affiliations -- psychology might begin to look something like a science, instead of a field desperately in search of a science.  Or so we should hope...

[In tomorrow’s post, I will relate a series of unfortunate events, and go into somewhat greater detail about the standard arguments made against opening up the review process (and why they’re all ‘bollocks,’ quite frankly).  For now, dear readers: life is but a dream! Don’t rock the boat?]

[This is the third in a series of posts on the politics of ideas in cognitive science and psychology, and is, properly speaking, three posts in one; see also 1, 2]

Last Words

[1]  When scientists go out of their way not to cite someone doing work in a similar area, this can (and does) result in subtle forms of plagiarism.  Citations are the lifeblood of a scientist, and if a paper by an in-group kid ‘borrows’ an idea from an out-group kid without citation, that idea will forever be remembered and attributed to the popular one.  It's happened to me.  It's painful.

[2]  From what I can gather, this does not hold in other scientific fields, such as chemistry or physics.  Correct me if I'm wrong --

[3] A young philosopher who studies the history of science once told me that researchers tend not to cite anything older than fifteen years, and that the discoveries of the fairly recent past are often ‘rediscovered’ every couple decades.  Charming hearsay, admittedly.

[4] This will, of course, be seen as something of a caricature.  Rightly so.  But undoubtedly, to write on something as complex as this is (invariably) to make a caricature of it.

[5]  Once we've bought into the bad ideas, we become invested in them.  They are no longer simply the currency of the field; they are our currency.

[6] The standard response goes something like, “Why did you have to introduce math into this?  I hate math!  That’s why I studied psychology!!  Like, I didn’t even get how your model WORKED.”  (…I wish I were joking.  Put a psychologist in a corner, and s/he starts sounding like a valley girl so fast)

19 responses so far

  • Christina Pikas says:

    This is all very Kuhnian and the passed-down-from-ancestors is exactly the invisible college described by Price, Crane, et al in the 1960s and 1970s. I can't say, as a complete outsider, that I'm surprised it's like this, but I guess I'm surprised you're willing to talk about it! What Kuhn would say is that there will be a revolution and some small group will break away from the handed down "normal" science to start some new group. What he didn't spend a lot of time on is what this feels like for the revolutionaries - probably sucks!
    As for passing reviews on, there has been an experiment in neuroscience, but I haven't seen any evaluation to see how it went, what participants thought of it, or even if it continued after the first try.

    • melodye says:

      Thanks for your comment, Christina! A friend of mine on Facebook commented that writing something like this openly is a surefire way to get blacklisted in academia. Which is funny right? -- because maybe it's true. But that's precisely why I write; because that is the culture I want to see changed. Because that is the kind of culture that has no rightful place in the world of ideas.

  • Stan says:

    Excellent, insightful post. I'm sad to say that the culture you describe is one of the reasons I'm glad I drifted from formal scientific research. Regarding resistance to change: yes, even the greatest scientists cling to familiar models far beyond the point of sense or utility. Not only can it be difficult to continually adjust to new data and ways to interpret them, it can also be painful to consider one's time 'lost'.

    Your thoughts on objectivity are interesting. I think there are degrees of objectivity, but science has been hamstrung by the assumption that absolute objectivity is/was possible – or desirable – when it seems to me more a stubborn myth. (The historical influence of physics is especially potent here.)

    When I studied science in uni, we were taught little or nothing about the history, the epistemology, the politics or the culture of science. (Only after I 'left' science did I begin learning about these subjects in earnest.) It's as though such areas were considered irrelevant, if not dangerous. In other words, we were trained (perhaps by unconscious omission) to adopt the prevailing academic and industrial climates: their priorities, ideas, operations, axioms, and blind spots. It's a recipe for stasis and stagnation.

    Since you mentioned Milgram a couple of times, you might be interested in this article in the new Psychologist about the making of his infamous experiment. Neil Postman, in Technopoly, wrote a very interesting critical analysis of it, suggesting that it revealed less about people (i.e. nothing we didn't already know) than about the biases and shortcomings of social science.

    • melodye says:

      I sort of kicked myself after reading your comment for writing so sloppily about objectivity. I haven't done as much philosophy of science as I would like, but enough to think that 'objectivity' in science is still very much a human construct. Regardless, I do think that we, as scientists, would like to say that there are significant differences between 'art' and 'science,' at least as we approach them as human enterprises. Even if our standards in science are imperfect, we at least like to think we have them. I think the art world, in general, is far more confused. (And not because I like art less, actually).

      Just to flesh out that statement a bit -- I took a philosophy degree as an undergraduate and spent much of my senior research looking at aesthetic experience and how philosophers have approached that question (which then touches on the question of how we can rightly judge art, and even if we can). It was nightmarish, really, reading the stuff, because I never felt like anyone quite got off the ground with their theories on it. It was all so much 'ineffable' 'intersubjective' waffling. I mean, I was enchanted by it, don't get me wrong, but I think art is very far off from establishing anything like the sort of criteria we have in science to adjudicate our work.

      The problem in science, of course, is that we necessarily rely on other humans to apply those standards. This is where the proposal to switch to an open-access 'wiki-like' review system comes into play. I think if we engage with our science collectively and collaboratively, and serve as checks on each other, we have a much better chance of producing good science than if we rely on a handful of -- not necessarily competent and not necessarily fair -- individuals to anonymously judge our work behind closed doors.

  • But this is not what anyone does. Indeed, the impulse is naturally quite the opposite; it is to stem the tides of change and to keep watchful guard over a cherished legacy. That people act on this impulse is not so much dishonest or corrupt as blind.

    This is a very, very excellent post and points out the toxic environment of a discipline in which the truth or falsity of grandiose theories makes or breaks people's careers. However, I disagree that submission to this toxic environment that goes so far as to include fudging, nudging, or faking data is more blind than it is dishonest or corrupt.

    • melodye says:

      I mean -- in a sense, I very much agree with you, but it's not (for my own sanity) how I want to look at it.

      My own view, as I said, is that we are creatures of our environment... which means that I try to steer away from using terms like 'dishonest' or 'corrupt' agentively, because I think that people can be those things without actually meaning to be, or without even realizing what they're doing. They become so enmeshed in a culture of corruption that the norms of that culture become their norms. And suddenly they're just 'surviving' in it, and looking out for their friends, and batting down what looks threatening. I mean, it's why anonymity in review is so perverse. It's precisely what allows this type of culture to survive.

  • A field in search of a science is such great way to put it. The complexity of the human mind unfortunately makes it so much easier to introduce politics/illusions. My undergrad degree is in biochemistry, and I never heard of anything like the decades long vicious debates that go on in psychology. Great post.

  • Incredible post. Your argument for transparency is great, and as somebody new to the world of science, I think that the biggest reason that it might be rejected by journals is because such decisions are made by the people who have the most to gain from the shortcomings of the current system. : (

  • Looks like you have the bad luck to be working in one of the more theoretically-divisive areas of psychology. Of course, that also makes your area very exciting- that is, if you don't become totally despairing because of the frustration involved in trying to publish in the exclusive journals.

    As an author, editor, and reviewer in the areas of vision science and visual attention, I have heard of situations as bad as what you describe, but they seem to be few and far between. Perhaps the areas I work in are (dare I suggest) a more mature science, or at least one more empirically driven, so that people are less likely to get stuck in fundamental disagreements over how the mind should be conceived. On the other hand, maybe my field is stuck in a narrow theoretical framework, and I just don't notice the few theoretical innovators who are being shut out, but I don't think so.

    Although I don't think the problem in my area is as bad as you describe, I'm in favor of many of the reforms you advocate. I think the best thing to do is to keep advocating through blogging, etc. and support journals or other publication outlets that are taking steps in the right direction. For example, journals like PLoS ONE (that I am associated with) are more inclusive than the average journal and are trying to make the evaluation process involve a wider community, by involving post-publication measures of influence and quality. This is one way to open up science to a community beyond the usual suspects who may have dug in their heels to defend the theoretical ground they have established for themselves. You mention pre-publication commentary as a step, which is a similar way forward. I am just talking about doing it post- instead of pre-.

    • melodye says:

      I completely agree. I definitely want to try sending some papers to PLoS ONE. We have something like twelve? thirteen? papers at various stages of the review process, and the backlog is getting unbelievable. (We have almost as many at various stages of drafting) When you have to spend weeks dealing with insanity in review, for each round, and months (or years) waiting on your papers to reach publication, it becomes nearly impossible to publish or get other meaningful work done, particularly when you have as much output as we do. It's just overwhelming. It would be such a relief to deal with a fair review process!

  • Namnezia says:

    "For instance, imagine that a technically-sound paper that had already secured several reviews was facing rejection at a top-tier journal for failing to establish broad interest. Specialty journals could then contact the authors with immediate offers of publication, or suggest that they would happily publish the paper if X, Y and Z were addressed. "

    This actually already exists - check out The Neuroscience Peer Review Consortium here:
    http://nprc.incf.org/

    • melodye says:

      How exciting! Thanks for bringing this to my attention. I wish something similar existed in cognitive science.

    • Christina Pikas says:

      Namnezia - that's the one I was thinking about. Do you know if it's actually working? Have there been evaluations? Any numbers for uptake (articles passed to other journals along with their reviews)?

      • Namnezia says:

        As far as I can tell it works. In my case I have not taken advantage because my papers always get into top-tier journals ...just kidding... no, I haven't taken advantage because the typical situation has been that whenever I have submitted to a high-level journal, the reviews tend to be so nasty and antagonistic that there's no way I would want them passed on to the next journal. So I've opted to take my chances with new reviewers, with better luck. In the cases where one of my papers was rejected solely for not being "novel" enough, it so happens that those journals were not part of the consortium. But as more journals join, I think it will become more useful.

  • Adam says:

    This is a fascinating discussion and I have enjoyed this series, especially coming from a radically different discipline. Melodye, I do think your observations are at least somewhat transferable beyond the narrow discipline scope that you've given them. I'd be curious what you think of an arrangement recently tried by the Shakespeare Quarterly:

    http://www.folger.edu/template.cfm?cid=542

    (and commented on in this blog:
    http://tenured-radical.blogspot.com/2010/08/journal-isms-what-would-it-take-to.html
    Comrade Physioprof, I know you follow that blog, too.)

  • Cory says:

    Same area here.

    I don't want to sound like I'm kicking sand in your eye here. I agree with much of what you have written here with regard to the problems in cogpsych. I also know that these issues can vary wildly depending on both institution and area of research. You personally may be in a doubly unfortunate position in that regard.

  • Mark says:

    Melodye:

    If you think you are having trouble,...

    I am a non-academic, non-scientist with a new framework for the origins of both life, and intelligence.

    How do you suppose the peer-review journal system treats MY papers? What thinkers like myself need is a forum which will simply receive new ideas and critique them scientifically in a publicly viewable forum.

  • Anon says:

    "As Stanley Milgram and others have observed, humans are creatures of context. Change the context and you can change the behavior, for better or worse.

    The review process is one such ‘context,’ that I believe is ripe for change. "

    -- I'm not sure that Milgram's main goal was to show that humans are creatures of context. I think he was much more interested in exploring the concept of authority (which, many would argue, is different from the concept of blind power, even though you seem to run these two together). I think it was people like Gilbert Harman who used Milgram's experiments (and others like them) to argue that humans are (mere) creatures of context.

    The problem with those arguments is that they rest on a non sequitur (of numbing grossness), inferring from the fact that behavior can be predicted in general based on context to the conclusion that there is no such thing as "character". The idea was to promote a different kind of moral education (just as you are trying to promote a different kind of review system); rather than trying to teach/train our children self discipline and strength of character, we simply should try to avoid putting them in trying situations.

    One of the (many) things I find troubling about your post is that it rests on similar sorts of non sequiturs and messy reasoning. I'll cite just two examples. Here's the first:

    "the research questions of one generation are unloaded straight on down to the next, without criticism or comment. ... this happens even when it’s clear that “the same old song” isn’t solving anyone’s problems anymore. This is because, among the elite, fear of change — real change — is rampant."

    You claim that change occurs slowly -- perhaps quite slowly -- and that research questions often are passed down from one academic generation to the next (I'm going to breeze by the claim that this is done "without criticism or comment" -- I don't think you really meant this; perhaps it sounded nice as you wrote it, but surely you don't think this).

    You then make a jump to the claim that fear of change (real change? As opposed to fake change?) is rampant. Now you're making claims about people's motives, and I wonder what sort of evidence you have for this. You give a sort of psychological analysis of the researchers here (their fear of change is well grounded according to you, for to confront change might be to confront the fact "that one’s best years were spent idly chasing after illusions"). But surely you realize that psychology "cuts both ways" (to borrow a turn of phrase from Raskolnikov, which seems appropriate, for you seem to be a Raskolnik). That is, one might pose equally plausible sounding explanations that run directly against this rather cynical interpretation of yours.

    *I* fear something here; I fear that you are being drawn down by the lyric tune of your writing. I fear something else, too, but I'll come back to that below.

    Here is the second example:

    "In science, an idea isn’t worth anything until we collectively make it so... until then, it’s simply a couple of lines on a piece of paper.... [but] the scientific process is founded on the idea that we can measure and evaluate our ideas empirically ; that is, in some sense objectively.... This is why the review process is so unbelievably important to science, in general, and to young fields like psychology, in particular. Because if research gets through the process that shouldn’t — or if, conversely, research doesn’t make it through the process that should — then we end up collectively buying into the wrong ideas. We end up writing the wrong stories and designing the wrong experiments. And while this might appear at first as something like progress, in fact, it’s just the opposite. We’re dreaming up epicycles all over again."

    There's a lot going on in this passage. For one thing, measuring and evaluating our ideas empirically shouldn't be confused with measuring and evaluating our ideas objectively (in any sense). Some claims can be established as true (or objectively true if you prefer) without any empirical testing, so empirical testing is not a necessary condition for establishing truth. And some claims can survive empirical testing despite the fact that they are false, so empirical testing is not a sufficient condition for establishing truth. For another thing, collectively buying the wrong stories and designing the wrong experiments isn't the same thing as dreaming up epicycles (at least, not in the sense in which this term is used in describing Ptolemy. But you play pretty freely with Kuhnian ideas in these posts, so perhaps I should back off on this point). Moreover, the value claim in the first sentence (an idea isn't worth anything until we make it so) -- well, I just don't understand that. I'm not sure what you're trying to say. The idea (if it's in the form of a proposition) might be true without being made so collectively -- and surely, in that sense, it's worth something.

    I also don't think it's the end of the world if bad research gets through the "process" or good research doesn't; I think we have to accept that in any good peer review system, mistakes will be made. You are quite right if you think -- and I wasn't entirely sure that you think this, but it seems to be something you're driving at -- that the two standards (viz., getting through the peer review process and being true) don't match up all the time. But do you really think that having a non-anonymous review system (or even merely requiring that tenured researchers have their anonymity privileges revoked) would solve this? I don't; I think it would put people at risk (which, to be fair, you acknowledge to some extent).

    Having gone through your psychological analyses of these professors, I'm now going to offer one for what might be driving some of what you say (or, rather, what I fear might be driving some of what you say). I wonder whether you have had some bad experiences with the peer review process *and* have a rather dismal view of human nature (which would be somewhat counterintuitive given what you say about Milgram, but I don't think you're consistent on this point), which is what leads you to assume the worst in so many scenarios (e.g., professors perpetuate their own research b/c they can't face the fact that they devoted their lives to the wrong ideas; attacks are personal/political rather than representative of sincerely held beliefs).

    And, as you do, I'll offer a suggestion for change. Perhaps you should accept that people make mistakes, screw up, act in passions, act akratically all the time. But they're doing their best. Even if sometimes it looks like editors are cherry picking or reviewers are vindictive, not everybody is like that -- and of the few that are like that, most are not always like that. So the change that I would suggest would be in how you interpret the (vastly underdetermined) data with which you are (perhaps quite often) confronted. The world is not such a horrid place; perhaps we can't get away from the fact that we are human. But perhaps we don't need to.
