Today the Chronicle of Higher Education has an article that bears on the allegation of shenanigans in the research lab of Marc D. Hauser. As the article draws heavily on documents given to the Chronicle by anonymous sources, rather than on official documents from Harvard's inquiry into allegations of misconduct in the Hauser lab, we are going to take them with a large grain of salt. However, I think the Chronicle story raises some interesting questions about the intersection of scientific methodology and ethics.
In recent days, there have been signs on the horizon of an impending blogwar. Prof-like Substance fired the first volley:
[A]lmost all major genomics centers are going to a zero-embargo data release policy. Essentially, once the sequencing is done and the annotation has been run, the data is on the web in a searchable and downloadable format.
How many other fields put their data directly on the web before those who produced it have the opportunity to analyze it? Now, obviously no one is going to yank a genome paper right out from under the group working on it, but what about comparative studies? What about searching out specific genes for multi-gene phylogenetics? Where is the line for what is permissible to use before the genome is published? How much of a grace period do people get with data that has gone public, but that they* paid for?
*Obviously we are talking about grant-funded projects, so the money is tax payer money not any one person’s. Nevertheless, someone came up with the idea and got it funded, so there is some ownership there.
Then, Mike the Mad Biologist fired off this reply:
Several of the large centers, including the one I work at, are funded by NIAID to sequence microorganisms related to human health and disease (analogous programs for human biology are supported by NHGRI). There's a reason why NIH is hard-assed about data release:
Funding agencies learned this the hard way, as too many early sequencing centers resembled 'genomic roach motels': DNA checks in, but sequence doesn't check out.
The funding agencies' mission is to improve human health (or some other laudable goal), not to improve someone's tenure package. This might seem harsh unless we remember how many of these center-based genome projects are funded. The investigator's grant is not paying for the sequencing. In the case of NIAID, there is a white paper process. Before NIAID will approve the project, several goals have to be met in the white paper (Note: while I'm discussing NIAID, other agencies have a similar process, if different scientific objectives).
Obviously, the organism and collection of strains to be sequenced have to be relevant to human health. But the project also must have significant community input. NIAID absolutely does not want this to be an end-run around R01 grants. Consequently, these sequencing projects should not be a project that belongs to a single lab, and which lacks involvement by others in the subdiscipline ("this looks like an R01" is a pejorative). It also has to provide a community resource. In other words, data from a successful project should be used rapidly by other groups: that's the whole point (otherwise, write an R01 proposal). The white paper should also contain a general description of the analysis goals of the project (and, ideally, who in the collaborative group will address them). If you get 'scooped', that's, in part, a project planning issue.
NIAID, along with other agencies and institutes, is pushing hard for rapid public release. Why does NIAID get to call the shots? Because it's their money.
Which brings me to the issue of 'whose' genomes these are. The answer is very simple: NIH's (and by extension, the American people's). As I mentioned above, NIH doesn't care about your tenure package, or your dissertation (given that many dissertations and research programs are funded in part or in their entirety by NIH and other agencies, they're already being generous†). What they want is high-quality data that are accessible to as many researchers as possible as quickly as possible. To put this (very) bluntly, medically important data should not be held hostage by career notions. That is the ethical position.
Prof-like Substance hurled back a hefty latex pillow of a rejoinder:
People feel like anything that is public is free to use, and maybe they should. But how would you feel as the researcher who assembled a group of researchers from the community, put a proposal together, drummed up support from the community outside of your research team, produced and purified the sample to be sequenced (which is not exactly just using a Sigma kit in a LOT of cases), dealt with the administration issues that crop up along the way, pushed the project through the center (another aspect woefully underappreciated), got your research community together once the data were in hand to make sense of it all and herded the cats to get the paper together? Would you feel some ownership, even if it was public dollars that funded the project?
Now what if you submitted the manuscript and then opened your copy of Science and saw that the major finding you centered the genome paper around had been plucked out by another group and published in isolation? Would you say, “well, the data’s publicly available, what’s unscrupulous about using it?” ...
[L]et’s couch this in the reality of the changing technology. If your choice is to have the sequencing done for free, but risk losing it right off the machine, OR to do it with your own funds (>$40,000) and have exclusive right to it until the paper is published, what are you going to choose? You can draw the line regarding big and small centers or projects all you want, but it is becoming increasingly fuzzy.
This is all to get back to my point that if major sequencing centers want to stay ahead of the curve, they have to have policies that encourage, rather than discourage, investigators' use of them.
It's fair to say that I don't know from genomics. However, I think the ethical landscape of this disagreement bears closer examination.
Dr. Isis posted a case study about a postdoc's departure from approved practices and invited her readers to discuss it. DrugMonkey responded by decrying the ridiculousness of case studies far more black and white than what scientists encounter in real life:
This is like one of those academic misconduct cases where they say “The PI violates the confidence of review, steals research ideas that are totally inconsistent with anything she’d been doing before, sat on the paper review unfairly, called the editor to badmouth the person who she was scooping and then faked up the data in support anyway. Oh, and did we mention she kicked her cat?”.
This is the typical and useless fare at the ethical training course. Obvious, overwhelmingly clear cases in which the black hats and white hats are in full display and provide a perfect correlation with malfeasance.
The real world is messier and I think that if we are to make any advances in dealing with the real problems, the real cases of misconduct and the real cases of dodgy animal use in research, we need to cover more realistic scenarios.
I'm sympathetic to DrugMonkey's multiple complaints: that real life is almost always more complicated than the canned case study; that hardly anyone puts in the years of study and training to become a scientist if her actual career objective is to be a super-villain; and especially that the most useful sort of ethics training for the scientist will be in day to day conversation with scientific mentors and colleagues rather than in isolated ethics courses, training modules, or workshops.
However, used properly, I think that case studies -- even unrealistic ones -- play a valuable role in ethics education.
When you're investigating charges that a scientist has seriously deviated from accepted practices for proposing, conducting, or reporting research, how do you establish what the accepted practices are? In the wake of ClimateGate, this was the task facing the Investigatory Committee at Penn State University investigating the allegation (which the earlier Inquiry Committee deemed worthy of an investigation) that Dr. Michael E. Mann "engage[d] in, or participate[d] in, directly or indirectly, ... actions that seriously deviated from accepted practices within the academic community for proposing, conducting, or reporting research or other scholarly activities".
One strategy you might pursue is asking the members of a relevant scientific or academic community what practices they accept. In the last post, we looked at what the Investigatory Committee learned from its interviews about this question with Dr. Mann himself and with Dr. William Easterling, Dean, College of Earth and Mineral Sciences, The Pennsylvania State University. In this post, we turn to the committee's interviews with three climate scientists from other institutions, none of whom had collaborated with Dr. Mann, and at least one of whom has been very vocal about his disagreements with Dr. Mann's scientific conclusions.
I want to apologize for the infrequency of my posting lately. Much of it can be laid at the feet of end-of-term grading, although today I've been occupied with a meeting of scientists at different career stages to which I was invited to speak about some topics I discuss here. (More about that later.) June will have more substantive ethics-y posts, honest!
Indeed, to tide you over, I want to ask for your responses to a case study I wrote for the final exam for my "Ethics in Science" class.
First, the case:
A reader writes:
I was in a PhD program in materials science, in a group that did biomedical research (biomaterials end of the field) and was appalled at the level of misconduct I saw. Later, I entered an MD program. I witnessed some of the ugliest effects of ambition in the lab there.
Do you think biomedical research is somehow "ethically worse" than other fields?
I've always wanted to compare measurable instances of unethical behavior across different fields. As an undergraduate I remember never hearing or seeing anything strange with the folks that worked with metallurgy and it never seemed to be an issue with my colleagues in these areas in graduate school. Whenever there is trouble it seems to come from the biomedical field. I'd love to see you write about that.
Thank you for doing what you do. Since that time I have so many regrets, and your blog keeps me sane.
First, I must thank this reader for the kind words. I am thrilled (although still a bit bewildered) that what I write here is of interest and use to others, and if I can contribute to someone's sanity while I'm thinking out loud (or on the screen, as the case may be), then I feel like this whole "blogging" thing is worthwhile.
Next, on the question of whether biomedical research is somehow "ethically worse" than research in other areas of science, the short answer is: I don't know.
Certainly there are some high profile fraudsters -- and scientists whose misbehavior, while falling short of official definitions of misconduct, also fell well short of generally accepted ethical standards -- in the biomedical sciences. I've blogged about the shenanigans of biologists, stem cell researchers, geneticists, cancer researchers, researchers studying the role of hormones in aging, researchers studying immunosuppression, anesthesiologists, and biochemists.
But the biomedical sciences haven't cornered the market on ethical lapses, as we've seen in discussions of mechanical engineers, nuclear engineers, physicists, organic chemists, paleontologists, and government geologists.
There are, seemingly, bad actors to be found in every scientific field. Of course, it is reasonable to assume that there are also plenty of honest and careful scientists in every scientific field. Maybe the list of well-publicized bad actors in biomedical research is longer, but given the large number of biomedical researchers compared to the number of researchers in all scientific fields (and also the extent to which the public might regard biomedical research as more relevant to their lives than, say, esoteric questions in organic synthesis), is it disproportionately long?
Again, that's hard to gauge.
However, my correspondent's broad question strikes me as raising a number of related empirical questions that it would be useful to try to answer:
In a post last month, I noted that not all (maybe even not many) supporters of animal rights are violent extremists, and that Bruins for Animals is a group committed to the animal rights position that was happy to take a public stand against the use of violence and intimidation to further the cause of animal liberation.
On Wednesday, Kristy Anderson (the co-founder of Bruins for Animals), Ashley Smith (the president), and Jill Ryther (the group's advisor) posted a critical response to my post. In the spirit of continuing dialogue, I'd like to respond to that response.
AR activists can rightly accept praise and credit for encouraging the two sides to come together in what was an unprecedented public and civil dialogue. However, one glaring and rather twisted irony too often overlooked is the fact that those very same participants who speak against aggressive campaigns against the animal experimentation industry and who are quick to praise AR advocates' stance on nonviolence are themselves engaged in (or are supporters of) violence and intimidation towards sentient beings on a daily basis.
Once again, I'm going to "get meta" on that recent paper on blogs as a channel of scientific communication I mentioned in my last post. Here, the larger question I'd like to consider is how peer review -- the back and forth between authors and reviewers, mediated (and perhaps even refereed) by journal editors -- does, could, and perhaps should play out.
Prefacing his post about the paper, Bora writes:
First, let me get the Conflict Of Interest out of the way. I am on the Editorial Board of the Journal of Science Communication. I helped the journal find reviewers for this particular manuscript. And I have reviewed it myself. Wanting to see this journal be the best it can be, I was somewhat dismayed that the paper was published despite not being revised in any way that reflects a response to the criticisms I voiced in my review.
Bora's post, in other words, drew heavily on comments he wrote for the author of the paper to consider (and, presumably, to take into account in her revision of the manuscript) before it was published.
Since, as it turns out, the published version of the paper did not reflect revisions addressing Bora's criticisms, Bora went ahead and made those criticisms part of the (now public) discussion of the published paper. He still endorses those criticisms, so he chooses to share them with the larger audience the paper has now that it has been published.
MommyProf wonders whether some of the goings on in her department are ethical. She presents two cases. I'm going to look at them in reverse order.
Case 2: Faculty member is tenure-track and he and I have collaborated on a paper. He was supposed to work on the literature, and sends me a literature review. It reads a little strangely to me, and I check the properties and find that it was actually written by an undergraduate in one of his classes. I write back to him and ask if that undergrad should be an author on the paper, since it would be a fairly major contribution, and he says yes, he forgot. This faculty member is assigned a graduate student each semester. This semester, the faculty member's graduate student comes to me and says his work has included collecting and analyzing all the data and writing substantial portions of the lit review, but that he is not being credited on the final paper.
This case embodies a number of problems of which we have spoken before, at length.
Indeed, it bears some striking similarities to a case we considered a couple years ago. (In that case, an undergraduate research intern was helping an advanced graduate student in the research group to round up the relevant literature background for their research project ... and the undergraduate's summary of that relevant literature crept, word for word, into the graduate student's dissertation.) Here's what I wrote about that case: