Archive for the 'Conduct of Science' category

High school email-an-expert projects: Respond or ignore?

Lately I have been experiencing a sharp uptick in high school projects apparently titled "Email questions to some random expert on the internet."

Is anyone else getting these?

Do you respond? In what depth?

17 responses so far

Sydney Brenner on the trainee exploitation scam of science.

In this interview, Nobel Laureate Brenner says:

Today the Americans have developed a new culture in science based on the slavery of graduate students. Now graduate students of American institutions are afraid. He just performs. He’s got to perform. The post-doc is an indentured labourer. We now have labs that don’t work in the same way as the early labs where people were independent, where they could have their own ideas and could pursue them.

The most important thing today is for young people to take responsibility, to actually know how to formulate an idea and how to work on it. Not to buy into the so-called apprenticeship. I think you can only foster that by having sort of deviant studies. That is, you go on and do something really different. Then I think you will be able to foster it.

But today there is no way to do this without money. That’s the difficulty. In order to do science you have to have it supported. The supporters now, the bureaucrats of science, do not wish to take any risks. So in order to get it supported, they want to know from the start that it will work. This means you have to have preliminary information, which means that you are bound to follow the straight and narrow.

 

I saw some comments suggesting he was bashing peer review, but if you look carefully you'll see he's talking about the GlamourGame with professional, not-working-scientist editors:

I think peer review is hindering science. In fact, I think it has become a completely corrupt system. It’s corrupt in many ways, in that scientists and academics have handed over to the editors of these journals the ability to make judgment on science and scientists. There are universities in America, and I’ve heard from many committees, that we won’t consider people’s publications in low impact factor journals.

...

In other words it puts the judgment in the hands of people who really have no reason to exercise judgment at all.

18 responses so far

Put up or shut up time, all ye OpenSciencePostPubReview Waccaloons!

Oct 24 2013 Published by under Academics, Conduct of Science, Open Access, Peer Review

PubMed Commons has finally incorporated a comment feature.

NCBI has released a pilot version of a new service in PubMed that allows researchers to post comments on individual PubMed abstracts. Called PubMed Commons, this service is an initiative of the NIH leadership in response to repeated requests by the scientific community for such a forum to be part of PubMed. We hope that PubMed Commons will leverage the social power of the internet to encourage constructive criticism and high quality discussions of scientific issues that will both enhance understanding and provide new avenues of collaboration within the community.

This is described as a beta version and, so far as I can tell, is for now only open to authors of articles already listed in PubMed.

Perhaps not as Open as some would wish but it is a pretty good start.

I cannot WAIT to see how this shakes out.

The Open-Everything, RetractionWatch, ReplicationEleventy, PeerReviewFailz, etc acolytes of various strains would have us believe that this is the way to save all of science.

This step by PubMed brings online commenting to the best place, i.e., where everyone searches out the papers, instead of a commercially beneficial place. It will, I presume, link the commentary to the openly-available PMC version once the 12 month embargo elapses for each paper. All in all, a good place for this to occur.

I will be eager to see whether commenting is adopted at all, what types of comments are offered, and whether certain kinds of papers draw more commentary than others. All in all, this is going to be a neat little experiment for the conduct-of-science geeks to observe.

I recommend you sign up as soon as possible. I'm sure the devout and TrueBelievers would beg you to make a comment on a paper yourself so, sure, go and comment on some paper.

You can search out commented papers with this string, apparently.
has_user_comments[sb]

In case you are interested in seeing what sorts of comments are being made.
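For the programmatically inclined, that subset filter should also work through NCBI's E-utilities. A minimal sketch (the esearch endpoint and the has_user_comments[sb] filter are NCBI's; the helper name and the example query term are my own, and no network request is actually made here):

```python
from urllib.parse import urlencode

# NCBI E-utilities esearch endpoint for PubMed queries.
EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def commented_papers_url(extra_term="", retmax=20):
    """Build an esearch URL restricted to PubMed records with user
    comments, via the has_user_comments[sb] subset filter."""
    term = "has_user_comments[sb]"
    if extra_term:
        term = f"({extra_term}) AND {term}"
    return EUTILS + "?" + urlencode(
        {"db": "pubmed", "term": term, "retmax": retmax, "retmode": "json"}
    )

# e.g. commented papers matching a topic of interest:
print(commented_papers_url("dopamine"))
```

Fetching that URL returns a JSON list of matching PMIDs, which you could then look up in PubMed proper.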

25 responses so far

On showing the data

Sep 05 2013 Published by under Careerism, Conduct of Science, NIH, NIH Careerism

If I could boil my criticism of the highly competitive chase for the "get" of a very high Impact Factor journal acceptance down to its most fundamental point, it would be the inefficiency.

GlamourDouchery of this type is an inefficient way to do science.

This is because of several factors related to the fundamental fact that if the science you conduct isn't published it may as well never have happened.

Science is an incremental business, ever built upon the foundations and structures created by those who came before. And built in sometimes friendly, sometimes uneasy collaboration with peers. No science stands alone.

Science these days is also a very large enterprise with many, many thousands of people beavering away at various topics. It is nearly impossible to think of a research program or project that doesn't benefit by the existence of peer labs doing somewhat-related work.

Consequently, it is a near truism that all science benefits from the quickest and most comprehensive possible knowledge of what other folks are doing.

The "get" of an extremely high Impact Factor Journal article acceptance requires that the authors, editors and reviewers temporarily suspend disbelief and engage in the mass fantasy that this is not the case. The participants engage in the fantasy that this work under consideration is the first and best and highly original. That it builds so fundamentally different an edifice that the vast majority of the credit adheres to the authors and not to any part of the edifice of science upon which they are building.

This means that the prospective GlamourArticle authors are highly motivated to keep an enormous amount of their progress under wraps until they are ready to reveal this fine new stand-alone structure.

Otherwise, someone else might copy them. Leverage their clever advances. Build a competing tower right next door and overshadow any neighboring accomplishments. Which, of course, builds the city faster....but it sure doesn't give the original team as much credit.

The average Glamour Article is also an ENORMOUS amount of work. Many, many person years go into creating one. Many people who would otherwise get a decent amount of credit for laying a straight and true foundation will now be entirely overlooked in the aura of the completed master work. They will never become architects themselves, of course. How could they? Even if they travel to Society Journal Burg, there is no record of them being the one to detail the windows or come up with a brilliant new way to mix the mortar. That was just scut work for throwaway labor, don't you know.

But the real problem is that the collaborative process between builders is hindered. Slowed for years. The dissemination of tools and approaches has to wait until the entire tower is revealed.

Inefficiency. Slowness. These are the concerns.

Sure, it is also a problem that the builders of the average Glamour Article tower may not share all their work even after the shroud has been removed. It would be nice to let everyone know just where the granite was found, how it was quarried and something about the brand new amazing mortar that (who was that anonymous laborer again? shrug) created. But there isn't really any pay for that and the original team has moved on. Good luck. So yes, it would be good to require them to show their work at the end.

Much, much more important, however, is that they show each part of the tower as it is being created. I mean, no, I don't think people need to work with a hundred eyes tracking their every move. I don't think every little mistake has to be revealed, nor do I think we necessarily need to know how each laborer holds her trowel. But it would be nice to show off the foundation when it is built. To reveal the clever staircase and the detailing around the windows once they are installed. Then each sub-team can get their day in the sun. Get the recognition they deserve.

[And if they are feeling a little oppressed, screw it, they can leave and take their credit with them. And their advances in knowledge can be spread to another town who will be happy to hire this credentialed foundation builder instead of some grumpy nobody who only claims to have built a foundation.]

The competition for Glamour Article building can't really catch up directly, after all it takes a good bit of work to lay a foundation or create a new window design. They can copy techniques and leverage them, but there is less chance of an out and out scoop of the full project.

So if the real problem is inefficiency, Dear Reader, the solution is most assuredly the incremental reveal of progress made. We don't need to watch the stirring and the endless recipes for mortar that have been attempted, we just need to know how the successful one was made. And to see the sections of the tower as they are completed.

Ironically enough, this is how it is done outside of GlamourCity. In Normalville, the builders do show their work. Not all of it in nauseating detail but incrementally. Sections are shown as they are completed. It is not necessary to wait for the roof to be laid to show the novel floorplan or for the paint to be on to show the craft that went into the floor joists.

This is a much more efficient way to advance.

It has to be, since resources are scarce and people in Society Burgh kind of give a shit if one of their neighbors is crushed under a block of limestone. And care if an improperly supported beam cracks and they have to get a new one.

This is unlike the approach of Glamour City where they just curse the laborer and draft three new ones to lift the block into place. And pull another beam out of their unending pile of lumber.

63 responses so far

Show me the data, Jerry!!!!!!

Sep 03 2013 Published by under #FWDAOTI, Conduct of Science, Tribe of Science

Today's Twittsplosion was brought to you by @mbeisen:

he then elaborated in a series of follow-up tweets.

There was a great deal of distraction in there from YHN, MBE and the Twitteratti. But these are the ones that get at the issue I was responding to. I think the last one here shows that I was basically correct about what he meant at the outset.

I also agree that it would be GREAT if all authors of papers had deposited all of their raw data, carefully annotated, commented and described (curated, in a word) with all of the things that I might eventually want to know. That would be kickass.

And I have had NUMEROUS frustrations that I cannot tell even from methods sections what was done, how the data were selected and groomed, etc in many critical papers.

It isn't that I assume fraud; rather, I find that when it comes to behaving animals in laboratory studies, details matter. Unfortunately, we all wish to overgeneralize from published reports....the authors want to imply they have reported a most universal TRUTH and other investigators wish to believe it so that they don't have to sweat the details.

This is never true in science, as much as we want to pretend.

Science is ever only a description of what has occurred under these specific conditions. Period. Including the ones we've bothered to describe in the Methods and those we have not bothered to describe. Including those conditions of which we have no knowledge or understanding that they might have contributed.

Let us take our usual behavioral pharmacology model, the 10 m Hedgerow BunnyHopper assay. The gold standard, of course. And everyone knows it is trivial to speed up the BunnyHopping with a pretreatment of amphetamine.

However, we've learned over the years that the time of day matters.

Until...finally....in its seniority, the Dash Lab fesses up. The PI allows a trainee to publish the warts, and to compare the basic findings, done at nighttime in naive bunnies, with what you get during the dawn/dusk period. In Bunnies who have seen the Dash arena before. And maybe they are hungry for clover now. And they've had a whiff of fox without seeing the little blighters before.

And it turns out these minor methodological changes actually matter.

We also know that dose response curves can be individual for amphetamine and if the dose is too high the Bunny just stims (and gets eaten by the fox). Perhaps this dose threshold is not identical so we're just going to chop off the highest dose because half of them were eaten after that dose. Wait...individuals? Why can't we show the individuals? Because maybe a quarter are speeded up by 4X and a quarter by 10X and now that there are these new genetic data on Bunny myocytes under stressors as diverse as....

So why do the new papers just report the effects of single doses of amphetamine in the context of this fancy transcranial activation of vector-delivered Channelrhodopsin in motor cortex? Where are the training data? What time of day were they run? How many Bunnies were aced out of the study because the ReaChr expression was too low? I want to do a correlation, dammit! and a multivariate analysis that includes my favorite myocyte epigenetic markers! Say, how come these damn authors aren't required to bank genomic DNA from every damn animal they run just so I can ask for it and do a whole new analysis?

After all, the taxpayers paid for it!

I can go on, and on and on with arguments for what "raw" data need to be included in all BunnyHopping papers from now into eternity. Just so that I can perform my pet analyses of interest.

The time and cost and sheer effort involved are of no consequence because of course it is magically unicorn fairy free time that makes it happen. Also, there would never be any such thing as a protracted argument with people who simply prefer the BadgerDigger assay and have wanted to hate on BunnyHopping since the 70s. Naaah. One would never get bogged down in irrelevant stuff better suited for review articles by such a thing. Never would one have to re-describe why this was actually the normal distribution of individual Hopping speeds and deltas with amphetamine.

What is most important here is that all scientists focus on the part of their assays and data that I am interested in.

Just in case I read their paper and want to write another one from their data.

Without crediting them, of course. Any such requirement is, frankly my dear, gauche.

42 responses so far

The naked chutzpah and hypocrisy of an AR wackaloon

Aug 16 2013 Published by under Animals in Research, Conduct of Science

There's a new post up over at Speaking of Research that documents The Double Life of Dr. Lawrence A. Hansen. The most astonishing thing is that this AR wackanut has the gall to hold research funding as PI and publish papers that, you guessed it, involve animal research. Including a study in "mongrel dogs" [cited 21 times including twice in 2012] which he first authored some ten years before hitting the scene in outrage over med school physiology labs which used canine models.

Go Read.

32 responses so far

23andme and the Cold Case

By way of brief introduction, I last discussed the 23andme genetic screening service in the context of their belated adoption of IRB oversight and interloper paternity rates. You may also be interested in Ed Yong's (or his euro-caucasoid doppelganger's) results.

Today's topic is brought to you by a comment from my closest collaborator on a fascinating low-N developmental biology project.

This collaborator raised a point that extends from my prior comment on the paternity post.

But, and here's the rub, the information propagates. Let's assume there is a mother who knows she had an affair that produced the kid or a father who impregnated someone unknown to his current family. Along comes the 23 and me contact to their child? Grandchild? Niece or nephew? Brother or sister? And some stranger asks them, gee, do you have a relative with these approximate racial characteristics, of approximately such and such age, who was in City or State circa 19blahdeblah? And then this person blast emails their family about it? or posts it on Facebook?

It also connects with a number of issues raised by the fact that 23andme markets to adoptees in search of their genetic relatives. This service is being used by genealogy buffs of all stripes and one can not help but observe that one of the more ethically complicated results will be the identification of unknown genetic relationships. As I alluded to above, interloper paternity may be identified. Also, one may find out that a relative gave a child up for adoption...or that one fathered a child in the past and was never informed.

That's all very interesting, but today's topic relates to crimes in which DNA evidence has been left behind. At present, so far as I understand, the DNA matching is to people who have already crossed the law enforcement threshold. In fact, if I am not mistaken, there was a recent brouhaha over just what sort of "crossing" of the law enforcement threshold should permit the cops to take your DNA. This does no good, however, if the criminal has never come to the attention of law enforcement.

Ahhhh, but what if the cops could match the DNA sample left behind by the perpetrator to a much larger database. And find a first or second cousin or something? This would tremendously narrow the investigation, wouldn't it?

It looks like 23andme is all set to roll over for whichever enterprising police department decides to try.

From the Terms of Service.

Further, you acknowledge and agree that 23andMe is free to preserve and disclose any and all Personal Information to law enforcement agencies or others if required to do so by law or in the good faith belief that such preservation or disclosure is reasonably necessary to: (a) comply with legal process (such as a judicial proceeding, court order, or government inquiry) or obligations that 23andMe may owe pursuant to ethical and other professional rules, laws, and regulations; (b) enforce the 23andMe TOS; (c) respond to claims that any content violates the rights of third parties; or (d) protect the rights, property, or personal safety of 23andMe, its employees, its users, its clients, and the public.

Looks to me that all the cops would need is a warrant. Easy peasy.

__
h/t to Ginny Hughes [Only Human blog] for cuing me to look over the 23andme ToS recently.

18 responses so far

F1000Research wants your negative results!

F1000Research will be waiving the publication fee for negative-result manuscripts through the end of August.


If you have negative results in your lab notebooks, this is the time to write them up! Like all journals, we of course publish traditional full-length research papers but, in addition, we accept short single-observation articles, data articles (i.e. a dataset plus protocol), and negative- and null-result submissions.

For negative and null results, it is especially important to ensure that the outcome is a genuine finding generated by a well executed experiment, and not simply the result of poorly conducted work. We have been talking to our Editorial Board about how to try to avoid the publication of the latter type of result and will be addressing this topic and asking for your input in a further post in the next few days.

The follow up post requesting comment is here.

This is a great idea and the original post nails down why.

This is not only a disappointment for the researchers who conducted the work, it’s also damaging to the overall scientific record. This so-called “publication bias” toward positive results makes it appear as though the experiments with negative or null results never happened.

Sometimes the unpublished experiments are obvious next steps in elucidating a particular biological mechanism, making it likely that other researchers will try the same thing, not realizing that someone else already did the work. This is a waste of time and money.

On other occasions, the positive results that are published are the exception: they could have been specific to a narrow set of conditions, but if all the experiments that didn’t work are not shown, these exceptional cases now look like the only possible result. This is especially damaging when it comes to drug development and medical research, where treatments may be developed based on an incomplete understanding of research results.

The waste of time and money cannot be emphasized enough, especially in these tight funding times. Why on earth should we tolerate any duplication of effort that is made necessary simply by the culture of not publicizing results that are not deemed sexy enough? This is the information age, people!

One example from my field is the self-administration of delta9-tetrahydrocannabinol (THC) by the common laboratory species used for self-administration studies of other drugs of abuse. Papers by Goldberg and colleagues (Tanda et al, 2000; Justinova et al, 2003) showed that squirrel monkeys will self-administer THC intravenously, which was big news. It was the first relatively clear demonstration in lab animals for a substance we know humans readily self-administer. As the Goldberg group related in their 2005 review article, there is no clear evidence that rodents will self-administer THC i.v. in the literature stretching back to the 1970s, when the self-administration technique was being used for studies of numerous drugs.

Over the last three decades, many attempts to demonstrate intravenous self-administration of THC or of synthetic cannabinoid CB1 receptor agonists by experimental animals were relatively unsuccessful (Pickens et al., 1973; Kaymakcalan, 1973; Harris et al., 1974; Carney et al., 1977; van Ree et al., 1978; Mansbach et al., 1994) (Table 1). None of these studies clearly demonstrated persistent, dose-related, self-administration behavior maintained by THC or synthetic cannabinoids, which would be susceptible to vehicle extinction and subsequent reinstatement in the absence of unusual ‘‘foreign’’ conditions.

The thing is that rats "wouldn't" self-administer nicotine either. Nor alcohol. That is, until people came up with the right conditions to create a useful model. In the case of ethanol it was helpful to either force them to become dependent first (via forced liquid diets adulterated with ethanol or ethanol inhalation chambers) or to slowly train them up on cocktails (called the flavorant-fade procedure). In the case of nicotine, the per-infusion dose was all critical and it helped to provide intermittent access, e.g., with four days on, three days off. Interestingly, while making rats dependent on nicotine using subcutaneous osmotic pumps didn't work very well (as it does for heroin), a recent study suggests that forced inhalation-based dependence on nicotine results in robust intravenous self-administration.

For many drugs of abuse, subtle factors can make a difference in the rodent model. Strain, sex, presence of food restriction, exact age of animals, circadian factors, per-infusion dose, route of administration, duration of access, scheduling of access.... the list goes on and on. A fair read of the literature suggests that when you have cocaine or heroin, many factors have only quantitative effects. You can move the means around, even to the p<0.05 level, but hey, it's cocaine or heroin! They'll still exhibit clear evidence that they like the drug.

When it comes to other drugs, maybe it is a little trickier. The balance between pleasurable and aversive effects may be a fine one (ever tried buccal nicotine delivery via chew or dip? huh?). The route of administration may be much more critical. Etc.

So the curious person might ask, how much has been tried? How many curious grad students or even postdocs have "just tried it" for a few months or a year? How many have done the most obvious manipulations and failed? How many have been told to give it up as a bad lot by older and wiser PIs (who tried to get THC self-administration going themselves back 20 years ago)?

I'm here to tell you that it has been attempted a lot more than has been published. Because the lab lore type of advice keeps rolling.

It is really hard, however, to get a comprehensive look at what has been tried and has led to failure. What was the quality of those attempts? N=8 and out? Or did some poor sucker run multiple groups with different infusion doses? Across the past thirty years, how many of the obvious tweaks have been unsuccessful?

Who cares, right? Well, my read is that there are some questions that keep coming around, sometimes with increased urgency. The current era of medical marijuana legalization and tip-toeing into full legalization means that we're under some additional pressure to have scientific models. The explosion of full-agonist cannabimimetic products (K2, Spice, Spike, etc containing JWH-018 at first and now a diversity of compounds) likewise rekindles interest. Proposals that higher-THC marijuana strains increase dependence and abuse could stand some controlled testing....if we only had better models.

Well, this is but one example. I have others from the subfields of science closest to my interests. I think it likely that you, Dear Reader, if you are a scientist, can come up with examples from your own fields where the ready availability of all the failed studies would be useful.

10 responses so far

A Note for the IDC Warriors and CrowdFunders Alike

Apr 10 2013 Published by under Conduct of Science, CrowdFund

Intrepid reporter @eperlste filed a dispatch from the front lines of the OpenScience, CrowdFund War.

I've reached out to several @qb3 incubator biotech startups to learn more about leasing lab space. $900/bench/month is a pretty penny!

$10,800 per year just for the bench space alone. One bench. He didn't elaborate so it is hard to know what is included, but I think we can safely assume that normal costs go up from there. Freezer space, hourly use of shared big-ticket equipment, etc. Vivarium fees to maintain mouse lines won't come cheaply. Waste disposal.
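Back-of-the-envelope, the arithmetic stacks up quickly. A toy sketch (the $900/bench/month figure is the one quoted above; every other line item is a made-up placeholder, not an actual QB3 price):

```python
BENCH_PER_MONTH = 900  # quoted incubator bench fee, dollars/month

def annual_cost(benches=1, extras=None):
    """Annualized cost: bench rent plus any extra yearly line items.
    The extras dict is purely hypothetical illustration."""
    total = BENCH_PER_MONTH * 12 * benches
    total += sum((extras or {}).values())
    return total

# Bench space alone for one bench:
print(annual_cost())  # 10800

# With some invented annual add-ons (freezer, vivarium, waste disposal):
print(annual_cost(extras={"freezer": 1200, "vivarium": 6000, "waste": 800}))
```

Even with modest guesses for the add-ons, a single-bench operation lands well north of the headline rent figure before a single reagent is purchased.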

Just another data point for you in your efforts to assess what can reasonably be accomplished for a given threshold of crowd-fund science support money and in determining where your Indirect Cost dollars for a traditional grant go.

23 responses so far

Jane Goodall, Plagiarist

From the WaPo article:

Jane Goodall, the primatologist celebrated for her meticulous studies of chimps in the wild, is releasing a book next month on the plant world that contains at least a dozen passages borrowed without attribution, or footnotes, from a variety of Web sites.

Looks pretty bad.

This bit from one Michael Moynihan at The Daily Beast raises the more interesting issues:

No one wants to criticize Jane Goodall—Dame Goodall—the soft-spoken, white-haired doyenne of primatology. She cares deeply about animals and the health of the planet. How could one object to that?

Because it leads her to oppose animal research using misrepresentation and lies? That's one reason why one might object.

You see, everyone is willing to forgive Jane Goodall. When it was revealed last week in The Washington Post that Goodall’s latest book, Seeds of Hope, a fluffy treatise on plant life, contained passages that were “borrowed” from other authors, the reaction was surprisingly muted.

It always starts out that way for a beloved writer. We'll just have to see how things progress. Going by recent events it will take more guns a'smokin' in her prior works to start up a real hue and cry. At the moment, her thin mea culpa will very likely be sufficient.

A Jane Goodall Institute spokesman told The Guardian that the whole episode was being “blown out of proportion” and that Goodall was “heavily involved” in the book bearing her name and does “a vast amount of her own writing.” In a statement, Goodall said that the copying was “unintentional,” despite the large amount of “borrowing” she engaged in.

Moynihan continues on to catalog additional suspicious passages. I think some of them probably need a skeptical eye. For example I am quite willing to believe a source might give the exact same pithy line about a particular issue to a number of interviewers. But this caught my eye:

Describing a study of genetically modified corn, Goodall writes: “A Cornell University study showed adverse effects of transgenic pollen (from Bt corn) on monarch butterflies: their caterpillars reared on milkweed leaves dusted with Bt corn pollen ate less, grew more slowly, and suffered higher mortality.”

A report from Navdaya.org puts it this way: “A 1999 Nature study showed adverse effects of transgenic pollen (from Bt corn) on monarch butterflies: butterflies reared on milkweed leaves dusted with bt corn pollen ate less, grew more slowly, and suffered higher mortality.” (Nor does Goodall mention a large number of follow-up studies, which the Pew Charitable Trust describes as showing the risk of GM corn to butterflies as “fairly small, primarily because the larvae are exposed only to low levels of the corn’s pollen in the real-world conditions of the field.”)

And here is the real problem. When someone who has a public reputation built on what people think of as science weighs in on other matters of science, they enjoy a lot of trust. Goodall certainly has this. So when such a person misuses this by misrepresenting the science to further their own agenda...it's a larger hurdle for the forces of science and rational analysis to overcome. Moynihan is all over this part as well:

One of the more troubling aspects of Seeds of Hope is Goodall’s embrace of dubious science on genetically modified organisms (GMO). On the website of the Jane Goodall Foundation, readers are told—correctly—that “there is scientific consensus” that climate change is being driven by human activity. But Goodall has little time for scientific consensus on the issue of GMO crops, dedicating the book to those who “dare speak out” against scientific consensus. Indeed, her chapter on the subject is riddled with unsupportable claims backed by dubious studies.

So in some senses the plagiarism is just emblematic of un-serious thinking on the part of Jane Goodall. The lack of attribution is going to be sloughed off with an apology and a re-edit of the book, undoubtedly. We should not let the poor scientific thinking go unchallenged though, just to raise a mob against plagiarism. The abuse of scientific consensus is a far worse transgression.

33 responses so far

Older posts »