Woodgett on modern science careers

(by drugmonkey) Jul 14 2014

Nailed it:

2 responses so far

Oh, you aren't alone in your teeth clenching

(by drugmonkey) Jul 08 2014

27 responses so far

The most replicated finding in drug abuse science

(by drugmonkey) Jul 08 2014

Ok, ok, I have no actual data on this. But if I had to pick one thing in substance abuse science that has been most replicated, it is this.

If you surgically implant a group of rats with intravenous catheters, hook them up to a pump which can deliver small infusions of saline adulterated with cocaine HCl and make these infusions contingent upon the rat pressing a lever...

Rats will intravenously self-administer (IVSA) cocaine.

This has been replicated ad nauseam.

If you want to pass a fairly low bar to demonstrate you can do a behavioral study with accepted relevance to drug abuse, you conduct a cocaine IVSA study [Wikipedia] in rats. Period.

And yet. There are sooooo many ways to screw it up and fail to replicate the expected finding.

Note that I say "expected finding" because we must include significant quantitative changes along with the qualitative ones.

Off the top of my head, here are the types of factors that can reduce your "effect" to a null effect, or change the outcome to the extent that even a statistically significant result isn't really the effect you are looking for:

  • Catheter diameter or length
  • Cocaine dose available in each infusion
  • Rate of infusion/concentration of drug
  • Sex of the rats
  • Age of rats
  • Strain of the rats
  • Vendor source (of the same nominal strain)
  • Time of day in which rats are run (not just light/dark* either)
  • Food restriction status
  • Time of last food availability
  • Pair vs single housing
  • "Enrichment" that is called-for in default guidelines for laboratory animal care and needs special exception under protocol to prevent.
  • Experimenter choice of smelly personal care products
  • Dirty/clean labcoat (I kid you not)
  • Handling of the rats on arrival from vendor
  • Fire-alarm
  • Cage-change day
  • Minor rat illness
  • Location of operant box in the room (floor vs ceiling, near door or away)
  • Ambient temperature of vivarium or test room
  • Schedule- weekends off or seven days a week?
  • Schedule- 1 hr, 2 hr or 6 hr access sessions?
  • Schedule- are reinforcer deliveries contingent upon one lever press? Five? Does the requirement progressively increase with each successive infusion? (See the sketch just after this list.)
  • Animal loss from the study for various reasons
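
To unpack that schedule bullet: under a fixed-ratio (FR) schedule the rat earns an infusion after a constant number of lever presses (FR1, FR5); under a progressive-ratio (PR) schedule the requirement climbs with each successive infusion until the animal quits responding (the "breakpoint"). For the curious, here is a minimal Python sketch of one commonly used exponential PR progression (Richardson & Roberts, 1996); the function name and defaults are my own, for illustration only:

    import math

    def progressive_ratio(n_infusions, a=5.0, b=0.2):
        """Response requirement for each successive infusion under the
        exponential progression of Richardson & Roberts (1996):
        requirement = round(a * exp(b * infusion_number) - a)."""
        return [round(a * math.exp(b * i) - a) for i in range(1, n_infusions + 1)]

    print(progressive_ratio(10))
    # [1, 2, 4, 6, 9, 12, 15, 20, 25, 32]

The same rat that breezes through an FR1 session may quit early under a steep PR curve, which is exactly why the schedule choice can change the headline result.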

As you might expect, these factors interact with each other in the real world of conducting science. Some factors you can eliminate, some you have to work around and some you just have to accept as contributions to variability. Your choices depend, in many ways, on your scientific goals beyond merely establishing the IVSA of cocaine.
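
To see why a "mere" procedural factor can flatten a real effect into a null result, it helps to remember the power arithmetic: if a factor shrinks the true effect size, the same group sizes that comfortably detect the original effect may detect almost nothing. A quick simulation sketch (the function and all numbers are mine, invented purely for illustration):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)

    def power_sim(effect_size, n_per_group=10, n_sims=5000, alpha=0.05):
        """Fraction of simulated two-group experiments reaching p < alpha."""
        hits = 0
        for _ in range(n_sims):
            control = rng.normal(0.0, 1.0, n_per_group)
            treated = rng.normal(effect_size, 1.0, n_per_group)
            if stats.ttest_ind(treated, control).pvalue < alpha:
                hits += 1
        return hits / n_sims

    # Hypothetical: the original lab's dose and schedule give d = 1.2;
    # a seemingly minor change shrinks the true effect to d = 0.5.
    print(power_sim(1.2))  # roughly 0.7 -- the effect is usually detected
    print(power_sim(0.5))  # roughly 0.2 -- most attempts now come up null

Nothing about the second lab is incompetent; the experiment has simply wandered into a lower-powered corner of the parameter space.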

Up to this point I'm in seeming agreement with that anti-replication yahoo, am I not? Jason Mitchell definitely agrees with me that there are a multitude of ways to come up with a null result.

I am not agreeing with his larger point. In fact, quite the contrary.

The point I am making is that we only know this stuff because of attempts to replicate! Many of these attempts were null and/or might be viewed as a failure to replicate some study that existed prior to the discovery that Factor X was actually pretty important.

Replication attempts taught the field more about the model, which allowed investigators of diverse interests to learn more about cocaine abuse and, indeed, drug abuse generally.

The heavy lifting in discovering the variables and outcomes related to rat IVSA of cocaine took place long before I entered graduate school. Consequently, I really can't speak to whether investigators felt that their integrity was impugned when another study seemed to question their own work. I can't speak to how many "failure to replicate" studies were discussed at conferences and less formal interactions. But given what I do know about science, I am confident that there was a little bit of everything. Probably some accusations of faking data popped up now and again. Some investigators no doubt were considered generally incompetent and others were revered (sometimes unjustifiably). No doubt. Some failures to replicate were based on ignorance or incompetence...and some were valid findings which altered the way the field looked upon prior results.

Ultimately the result was a good one. The rat IVSA model of cocaine use has proved useful for understanding the neurobiology of addiction.

The incremental, halting, back and forth methodological steps along the path of scientific exploration were necessary for lasting advance. Such processes continue to be necessary in many, many other aspects of science.

Replication is not an insult. It is not worthless or a-scientific.

Replication is the very lifeblood of science.

__
*rats are nocturnal. check out how many studies**, including behavioral ones, are run in the light cycle of the animal.

**yes to this very day, although they are certainly less common now

20 responses so far

Being as wrong as can be on the so-called replication crisis of science

(by drugmonkey) Jul 07 2014

I am no fan of the hysterical hand-wringing about some alleged "crisis" of science whereby the small-minded and Glam-blinded insist that most science is not replicable.

Oh, don't get me wrong. I think replication of a prior result is the only way we really know what is most likely to be what. I am a huge fan of the incremental advance of knowledge built on prior work.

The thing is, I believe that this occurs down in the trenches where real science is conducted.

Most of the specific complaining that I hear about failures to replicate studies is focused on 1) Pharma companies trying to cherry pick intellectual property off the latest Science, Nature or Cell paper and 2) experimental psychology stuff that is super truthy.

With regard to the former, cry me a river. Publication in the highest echelons of journals, publication of a "first" discovery/demonstration of some phenomenon is, by design, very likely not easily replicated. It is likely to be a false alarm (and therefore wrong) and it is likely to be much less generalizable than hoped (and therefore not "wrong" but definitely not of use to Pharma vultures). I am not bothered by Pharma execs who wish that publicly funded labs would do more advancing of intellectual property and serve it up to them partway down the traditional company pipeline. Screw them.
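
Incidentally, the "likely to be a false alarm" part has a standard statistical basis, often called the winner's curse: when small studies are filtered for statistical significance before publication, the published effect sizes are inflated, so even a faithful replication should be expected to find something smaller. A minimal simulation sketch, with the function name and all numbers invented for illustration:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    def mean_published_effect(true_d=0.3, n=15, n_studies=20000, alpha=0.05):
        """Mean observed effect size (Cohen's d) among simulated studies
        that clear a p < alpha publication filter."""
        winners = []
        for _ in range(n_studies):
            control = rng.normal(0.0, 1.0, n)
            treated = rng.normal(true_d, 1.0, n)
            if stats.ttest_ind(treated, control).pvalue < alpha:
                pooled_sd = np.sqrt((control.var(ddof=1) + treated.var(ddof=1)) / 2)
                winners.append((treated.mean() - control.mean()) / pooled_sd)
        return float(np.mean(winners))

    # With a modest true effect and small samples, the significant subset
    # overstates the effect considerably.
    print(mean_published_effect())  # ~0.8, versus a true d of 0.3

So the Pharma lab that fails to reproduce a Glamour paper's headline effect at the published size isn't necessarily seeing fraud; it may just be seeing regression to the mean.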

Psych studies. Aaah, yes. They have a strong tradition of replication to rely upon. Perhaps they have fallen by the wayside in recent decades? Become seduced to the dark side? No matter. Let us return to our past, eh? Where papers in the most revered Experimental Psychology journals required several replications within a single paper. Each "Experiment" constituting a minor tweak on the other ones. Each paper firmly grounded in the extant literature with no excuses for shitty scholarship and ignoring inconvenient papers. If there is a problem in Psych, there is no excuse because they have an older tradition. Or possibly some of the lesser Experimental Psychology sects (like Cognitive and Social) need to talk to the Second Sect (aka Behaviorism).

In either of these situations, we must admit that replication is hard. It may take some work. It may take some experimental tweaking. Heck, you might spend years trying to figure out what is replicable/generalizable, what relies upon very...specific experimental conditions and what is likely to have been a false alarm. And let us admit that in the competitive arena of academic science, we are often more motivated by productivity than we are by solving some esoteric problem nagging at the back of our minds. So we give up.

So yeah, sometimes practicalities (like grant money. You didn't seriously think I'd write a post without mentioning that, did you?) prevent a thorough run at a replication. One try simply isn't enough. And that is not a GoodThing, even if it is current reality. I get this.

But....

Some guy has written a screed against the replication fervor that is actually against replication itself. It is breathtaking.

All you need to hook your attention is conveniently placed as a bullet-point preamble:

  • Recent hand-wringing over failed replications in social psychology is largely pointless, because unsuccessful experiments have no meaningful scientific value.
  • Because experiments can be undermined by a vast number of practical mistakes, the likeliest explanation for any failed replication will always be that the replicator bungled something along the way. Unless direct replications are conducted by flawless experimenters, nothing interesting can be learned from them.
  • Three standard rejoinders to this critique are considered and rejected. Despite claims to the contrary, failed replications do not provide meaningful information if they closely follow original methodology; they do not necessarily identify effects that may be too small or flimsy to be worth studying; and they cannot contribute to a cumulative understanding of scientific phenomena.
  • Replication efforts appear to reflect strong prior expectations that published findings are not reliable, and as such, do not constitute scientific output.
  • The field of social psychology can be improved, but not by the publication of negative findings. Experimenters should be encouraged to restrict their "degrees of freedom," for example, by specifying designs in advance.
  • Whether they mean to or not, authors and editors of failed replications are publicly impugning the scientific integrity of their colleagues. Targets of failed replications are justifiably upset, particularly given the inadequate basis for replicators’ extraordinary claims.

Seriously, go read this dog.

This part seals it for me.

So we should take note when the targets of replication efforts complain about how they are being treated. These are people who have thrived in a profession that alternates between quiet rejection and blistering criticism, and who have held up admirably under the weight of earlier scientific challenges. They are not crybabies. What they are is justifiably upset at having their integrity questioned.

This is just so dang wrong. Trying to replicate another paper's effects is a compliment! Failing to do so is not an attack on the authors' "integrity". It is how science advances. And, I dunno, maybe this guy is revealing something about how he thinks about other scientists? If so, it is totally foreign to me. I left behind the stupid game of who is "brilliant" and who is "stupid" long ago. You know, when I was leaving my adolescent arrogance (of which I had plenty) behind. Particularly in the experimental sciences, what matters is designing good studies, generating data, interpreting data and communicating that finding as best one can. One will stumble during this process...if it were easy it wouldn't be science. We are wrong on a near-weekly basis. Given this day-to-day reality, we're going to be spectacularly wrong on the scale of an entire paper every once in a while.

This is no knock on someone's "integrity".

Trying to prevent* anyone from replicating your work, however, IS a knock on integrity.

On the scientific integrity of that person who does not wish anyone to try to replicate his or her work, that is.

__
*whether this be by blocking publication via reviewer or editorial power/influence, torpedoing a grant proposal, interfering with hiring and promotion or by squelching intrepid grad students and postdoctoral trainees in your own lab who can't replicate "The Effect".

26 responses so far

Repost: A nonpology for my Glamour hatred. And for PP.

(by drugmonkey) Jul 03 2014

I wrote this a while ago. Seems worth reposting for new readers:

I really should apologize to my readers who get their feelings hurt when 1) I bash GlamourMag science and 2) CPP bashes society journal level science. I just couldn't figure out how to make it something other than a nonpology. So the nonpology version is, sorry dudes, sorry that your feelings are hurt if there is some implication that you are a trivial fame-chasing, probably data-faking GlamourHound. Also, sorry if the ranting that I trigger from certain commenters has the effect of making you feel as though you are a trivial, meaningless speedbump who is wasting NIH dollars better spent on RealScientists who do RealGrandeWorkEleven. The fact is, CPP and I are in relatively comfortable situations compared with many of our readers. It is no secret that we have jobs and grant funding. Although it is true that both of us are not above making an exaggerated point for dramatic discussion-encouraging purposes, it is probably no surprise that we come from distinctly different points of view ForRealz on this particular issue. Speaking only for myself in this case, I've been around long enough and enjoyed enough of what I consider to be success in what I want to do as a scientist that it tends to insulate me against criticism. I get that this is not true for all of you. If my intent in raising these issues (i.e., to show that the dominant meme is not reflective of the only way to have a career) backfires for some of you, I do regret that.

One response so far

Brief Reader query

(by drugmonkey) Jul 03 2014

Do you ever click on the category links at the top of a post to read other items that I have placed in that category?

If so is it helpful?

11 responses so far

Scientific peer review is not broken, but your Glamour humping ways are

(by drugmonkey) Jul 03 2014

I recently had a publishing experience that is not atypical for me. Submitted a manuscript, got a set of comments back in about four weeks. Comments were informed, pointed a finger at some weak points in the paper but did not go all nonlinear about what else they'd like to see or whinge about mechanism or talk about how I could really increase the "likely impact". The AE gave a decision of minor revisions. We made them and resubmitted. The AE accepted the paper.

Boom.

The manuscript had been previously rejected from somewhere else. And we'd revised the manuscript according to those prior comments as best we could. I assume that made the subsequent submission go smoother but it is not impossible we simply would have received major revisions for the original version.

Either way, the process went as I think it should.

This brings me around to the folks who think that peer review of manuscripts is irretrievably broken and needs to be replaced with something NEW!!!!11!!!.

Try working in the normal scientific world for a while. Say, four years. Submit to regular journals edited by actual working peer scientists. ONLY. Submit to journals of pedestrian and/or unimpressive Impact Factor (that would be the 2-4 range from my frame of reference). Submit interesting stories- whether they are "complete" or "demonstrate mechanism" or any of that bullshit. Then submit the next part of the continuing story you are working on. Repeat.

Oh, and make sure to submit to journals that don't require any page charge. Don't worry, they exist.

Give your trainees plenty of opportunity to be first author. Give them lots of experience writing and allow them to put their own thoughts into the paper...after all, there will be many more papers to go around.

See how the process works. Evaluate the quality of review. Decide whether your science has been helped or hindered by doing this.

Then revisit your prior complaints about how peer review is broken.

And figure out just how many of them have more to do with your own Glamour humping ways than they do with anything about the structure of Editor managed peer-review of scientific manuscripts.

__
Also see Post-publication peer review and preprint fans

16 responses so far

Medical marijuana "researcher" fired by U of A

(by drugmonkey) Jul 02 2014

From the LA Times:

The University of Arizona has abruptly fired a prominent marijuana researcher who only months ago received rare approval from federal drug officials to study the effects of pot on patients suffering from post traumatic stress disorder.

The firing of Suzanne A. Sisley, a clinical assistant professor of psychiatry, puts her research in jeopardy and has sparked indignation from medical marijuana advocates.

I bet. Interestingly, I see no evidence on PubMed that this Sisley person has any expertise in conducting research at all. I'm not saying I need exhaustive credentials, but I'd like to see a published study or two.

Cue the usual raving about how this is all a vast right wing conspiracy to keep down miraculous medication...

Sisley charges she was fired after her research – and her personal political crusading – created unwanted attention for the university from legislative Republicans who control its purse strings.

“This is a clear political retaliation for the advocacy and education I have been providing the public and lawmakers,” Sisley said. “I pulled all my evaluations and this is not about my job performance.”

Well, this IS Arizona we're talking about. I'm going to want to see more* but I guess I am going to have to score myself as sympathetic to the notion that this was a political squelching.

Still, the University is denying the charge...

University officials declined to explain why Sisley’s contract was not renewed, but objected to her characterization.

“The university has received no political pressure to terminate any employee,” said Chris Sigurdson, a university spokesman. He said the university embraces research of medical marijuana, noting that it supported a legislative measure in 2013 permitting such studies to be done on state campuses.

Ok, "embraces", eh? We'll see if that turns out to be true.

__
h/t: clbs

*if this holds true to form, the University will be compelled to make a case for how she wasn't competent at the "clinical assistant professor" category of association with U of A.

4 responses so far

PSA: Keep your age assumptions about PIs to yourowndamnself

(by drugmonkey) Jul 01 2014

I realize this is not news to most of you. But the Twitts are aTwitt today about the way youthful-appearing faculty are treated by...everyone.

From undergrads to grads to postdocs to faculty and administration there is a perception of what a Professor looks like.

And generally that perception means "old". See Figure 1.

Figure 1: Google Image Search for "Professor"

So junior faculty who look in some way too young for that expectation are occasionally mistaken for postdocs or grad students.

This effect has a profound sex bias, of course, which is why I'm bringing it up.

Women are much more likely to report being confused for nonfaculty.

This has all sorts of knock-on bad effects, including how seriously their peers take them as scientists, their own imposter syndrome battles and their relationships with trainees.

My request to you, if you have not considered such issues, is to just remember to check yourself. When in doubt at a poster session or academic social event, assume the person might be faculty until and unless they clue you in otherwise by what they say. Hint: When they say "my boss" or "my PI" or "my mentor" then it is okay to assume the person is a trainee. If they say "my lab" and don't further qualify then it is best to assume they are the head.

In most cases, it simply isn't necessary for you to question the person AT ALL about "who they work for".

I have had only two or three experiences in my career related to this topic, as one would expect given that I present pretty overtly as male. They all came fairly early on, when I was in my early thirties.

One greybeard at a poster session (at a highly greybearded and bluehaired meeting, admittedly) was absolutely insistent about asking whose lab it "really" was. I was mostly bemused because I'm arrogant and what not and I thought "Who IS this old fool?". I think I had ordered authors on the poster with me first and my trainees and/or techs in following order, and this old goat actually asked something about whether it was the last author's (my tech's) lab.

There were also a mere handful of times in which people's visual reaction on meeting me made it clear that I violated their expectations based on, I guess, knowing my papers. Several of these were situations in which the person immediately or thereafter admitted they were startled by how young I was.

As I said, I present as male and this is basically the expected value. Men don't get the queries and assumptions quite so much.

One final (and hilarious) flip side. I happened to have a couple of posters in a single session at a meeting once upon a time, and my postdoctoral PI was around. At one of my posters this postdoc advisor was actually asked "Didn't you use to work with [YHN]?" in the sort of tone that made it clear the person assumed I had been the PI and my advisor the trainee.

Guess what gender this advisor is?

49 responses so far

Strategies for your #A2asA0 Resubmissions

(by drugmonkey) Jun 30 2014

A query came into the blog email box about how to deal with submitting a new grant based on the prior A1 that did not get funded. As you know, NIH banned any additional revisions past the A1 stage back in 2009. Recently, they have decided to stop scrutinizing "new" applications for similarity with previously reviewed and not-funded applications. This is all well and good, but how should we go about constructing the "new" grant, eh? From the Reader:

Do you use part of your background section to address reviewer comments? You're not allowed to have an introduction to the application, but as far as I can tell there is no prohibition on using other parts of the application as a response to reviewers.

I could see the study section as viewing this a) innovative, b) a sneaky attempt to get around the rules, c) both a and b.

I am uncertain about the phrasing of the Notice where it says "must not contain an introduction to respond to the critiques from the previous review". In context I certainly read this as prohibiting the extra page that you get for an amended application. What is less clear is whether this is prohibiting anything that amounts to such introduction if you place it in the Research Strategy. I suspect you could probably get away with direct quotes of reviewer criticisms.

This seems unwise to me, however. I think you should simply take the criticisms and revise your proposal accordingly as you would in the case of an amended version. These revisions will be sprinkled throughout the application as appropriate- maybe a change in the Significance argument, maybe a new Experiment in Aim 2, maybe a more elaborated discussion of Potential Pitfalls and Alternative Approaches.

Given the comments, perhaps you might need to state some things twice or set off key points in bold type. Just so the next set of reviewers don't miss your point.

But I see no profit in directly quoting the prior review; it just wastes space.

10 responses so far
