Scientopia (http://scientopia.org)

How my personality test tells me to be on twitter
http://inbabyattachmode.scientopia.org/2015/03/29/how-my-personality-test-tells-me-to-be-on-twitter/ (Sun, 29 Mar 2015)

A while ago I wrote about personality differences, and recently I had the training where we discovered our personal profiles. Apparently, I am pretty extraverted, and also competitive, enthusiastic, determined, and strong-willed. But the best sentence in my personal profile came from the page describing my blind spots: "This person would much rather engage in quick intellectual banter than complete some mundane task..." Seems like a perfect description of procrastinating on Twitter!

But It's Already Been Done! #ExpBio
http://whizbang.scientopia.org/2015/03/29/but-its-already-been-done-expbio/ (Sun, 29 Mar 2015)

Despite the proliferation of journals and publications, our basic science efforts often do not pay off in new treatments. Part of this is time; it generally takes 15-20 years for a new scientific observation to be translated to the bedside.

The uglier part of the story is lack of reproducibility of preclinical studies.

In 2011 a pharmaceutical company published what proportion of preclinical studies could be replicated for drug development:

[Figure: proportion of preclinical studies that could be replicated]

The bottom line: well under half! Inconsistent effects do not make good treatment prospects.

Saturday's session, Reproducibility in Research: What are the problems? How can we fix them? What happens if we don't?, addressed these issues. Sponsored by the Policy Committee, it brought together several speakers.

Perhaps the most intriguing idea involved unconscious bias. This concept receives a lot of attention in discussions of diversity issues: most of us have been conditioned to see a white male as the default for a professor or a leader. In science, the bigger problem is the way we do our work. As humans, we are programmed to find evidence that supports our bias and, perhaps, to minimize findings that contradict it. In forming a hypothesis, we create a bias that then demands support.

So what can we do about this unconscious process?

One thought is to stop making hypotheses.

Now, before you start screaming about scientific sacrilege, just listen. The idea would be to entertain multiple potential outcomes, or simply to define a scientific question without specifying an expected outcome.

Obviously, eliminating our current well-defined hypothesis testing model will not solve all these issues, especially with such pressure for high-impact publications to get funding and keep jobs. A variety of other ideas came up, but the problem clearly does not have an easy fix.

This great session left us with more questions than answers. What a great way to start the meeting!

To Sleep: 2015 Cannon Award #ExpBio
http://whizbang.scientopia.org/2015/03/29/to-sleep-2015-cannon-award-expbio/ (Sun, 29 Mar 2015)

The American Physiological Society portion of Experimental Biology officially opened Saturday evening with the Cannon Lecture. This year's recipient of the award, Masashi Yanagisawa, addressed his work in understanding sleep.

His lecture, Solving the mystery of sleep: from orphan receptors to forward genetics, started with studies of the orexin knock-out mouse. His group thought the mouse would have abnormal body weight; however, it ended up being a model of narcolepsy! He then went on to outline neural control pathways for sleep and genetic work in this area (disclaimer: brain stuff confuses me).

The talk outlined some fascinating findings; however, in spite of all of this study, we still do not know why we sleep. What purpose do these hours of unconscious helplessness serve? It must be important for all mammals to enter this vulnerable state every day.

More questions mean more science.

Selected Data and Sources Relevant to Research Enterprise Sustainability
http://datahound.scientopia.org/2015/03/28/selected-data-and-sources-relevant-to-research-enterprise-sustainability/ (Sat, 28 Mar 2015)

I participated in the follow-up meeting to the Alberts et al. paper, a meeting summarized in a recent PNAS paper. This summary noted that "...most were surprised to learn that the percentage of NIH grant-holders with independent R01 funding who are under the age of 36 has fallen sixfold (from 18% to about 3%) over the past three decades." This statement is probably accurate, but I was disappointed that many participants were unfamiliar with important facts and trends that have affected the biomedical enterprise over the past two decades.

What information is important for individuals to know in order to participate in discussions about potential corrections to the present system? In addition to the demographic data noted above, below are some slides that I have used in presentations on the sustainability of the biomedical research enterprise (some of which are derived from posts here or from my columns at ASBMB Today).

[Slides: Sustainability-1 and Slide01 through Slide11]

Which of these are most important? What are other important data sets or data sources that should be included in such presentations?

This is who is leading the fight for your future in science
http://drugmonkey.scientopia.org/2015/03/27/this-is-who-is-leading-the-fight-for-your-future-in-science/ (Fri, 27 Mar 2015)

Tweep @MHendr1ck is killing it. The latest.

The PI R01 age distribution looks like the 2010 one from this PPT file.

The "Jedi Council" is, I believe, the participants in a two-day workshop convened by Alberts, Kirschner, Tilghman, and Varmus, as detailed here (see Acknowledgements).

More Age Data from NIH: Surprising Award Rate Data
http://datahound.scientopia.org/2015/03/27/more-age-data-from-nih-surprising-award-rate-data/ (Fri, 27 Mar 2015)

In the context of recent discussions of NIH age group data, @dgermain21 pointed to some interesting data in a recent NIH report on physician-scientists regarding NIH R01 Award Rates as a function of age group (as well as degree, race/ethnicity, and gender). These data are quite surprising, as shown below:

[Figure: R01 Award Rate by PI age group]

These data are for all individuals in the analysis. I have omitted the curves for individuals aged 30 or younger and 71+, since these data are relatively noisy, presumably due to the relatively small numbers of individuals in these groups.

The term Award Rate is defined by NIH as "the number of awards made in a fiscal year divided by the absolute number of applications where we don’t combine resubmissions (A1s) that come in during the same fiscal year." Thus, Award Rate is lower than Success Rate, since the denominator is higher.
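
To make the denominator difference concrete, here is a minimal Python sketch. The application counts are entirely hypothetical; the point is only that counting A1 resubmissions separately enlarges the denominator and pushes the Award Rate below the Success Rate.

# Hypothetical applications received in one fiscal year, keyed by (project, version).
applications = [
    ("P1", "A0"), ("P1", "A1"),   # same project, original plus resubmission in the same FY
    ("P2", "A0"),
    ("P3", "A0"), ("P3", "A1"),
    ("P4", "A0"),
]
awards = 2  # hypothetical number of awards made that fiscal year

# Award Rate: the denominator counts every application, resubmissions included.
award_rate = awards / len(applications)

# Success Rate: resubmissions of the same project within the fiscal year are
# collapsed into a single application, so the denominator is smaller.
unique_projects = {project for project, _ in applications}
success_rate = awards / len(unique_projects)

print(f"Award rate:   {award_rate:.1%}")    # 2/6 = 33.3%
print(f"Success rate: {success_rate:.1%}")  # 2/4 = 50.0%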

Of course, the surprising observation is that these rates are highest for the 31-40 age group and decline monotonically, so that they are lowest for the 61-70 age group. This is certainly counter to what I would have expected; I would have anticipated the opposite trend, or perhaps a peak for the 51-60 age group. This observation demands an explanation.

Digging into the report, I found that the Quantitative Analysis Methodology section indicates that

"The NIH awards and time period selected for inclusion in the system from IMPACII (the large internal NIH database) were:

  • Research Project Grants for the following 25 activity codes between 1993 and 2012, Type 1 applications,..."

The term "Type 1" applications refers to new (as opposed to competing renewal) applications. This suggests that the above data may be only for these new applications. Competing renewal (Type 2) applications come predominantly from more senior investigators and have substantially higher success rates than new applications. Thus, the restriction to Type 1 applications would be expected to improve the standing of younger investigators relative to older ones. This may be an important contributor to these data, although I still find it surprising that the reported trends apply even to new R01 applications.

Interested readers should look at the report and help try to understand how to interpret these data.

 

Grants won vs grants awarded on your CV
http://drugmonkey.scientopia.org/2015/03/26/grants-won-vs-grants-awarded-on-your-cv/ (Thu, 26 Mar 2015)

Sometimes you have to turn down something that you sought competitively.

Undergrad or graduate school admission offers. Job offers. Fellowships.

Occasionally, research support grants.

Do you list these things on your CV? I can see the temptation.

If you view your CV as being about competitive accolades, that is. But we don't do that. In academics, your CV is a record of what you have done. Which undergraduate university conferred a degree upon you. Which place granted your doctorate. Who was silly enough to hire you for a real job.

We don't list undergrad or grad school bids or the places that we turned down for a job offer.

So don't list grants you didn't take either.

Rock Talk Age Data: Effective Indirect Cost Rates 1998-2014
http://datahound.scientopia.org/2015/03/26/rock-talk-age-data-effective-indirect-cost-rates-1998-2014/ (Thu, 26 Mar 2015)

A recent post on Rock Talk presented data on the amount of funding as a function of PI age group. These data were not presented in a terribly informative way, but a file was available for download, and Michael Hendricks normalized the data by the number of PIs in each age group to reveal more interesting trends, discussed at Drugmonkey.

The downloadable data include a breakdown of Direct and Total Costs. I have been looking for such data over a longer period than the last couple of years and thought I would take a look. Below is a plot of the Effective Indirect Cost Rate, defined as (Total Costs - Direct Costs)/Direct Costs, for the overall data set.

[Figure: overall Effective Indirect Cost Rate, 1998-2014]

The Effective Rate drops from 44.2% in 1998 to a low of 37.2% in 2012 before rising slightly over the past two years. These values are all somewhat lower than I anticipated based on my previous analysis of R01s.
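
For readers who want to reproduce this kind of calculation from the downloadable Rock Talk file, here is a minimal sketch in Python; the file name and column names are hypothetical placeholders for whatever the actual spreadsheet uses, and the script simply applies the (Total Costs - Direct Costs)/Direct Costs definition above, year by year.

import csv
from collections import defaultdict

direct = defaultdict(float)
total = defaultdict(float)

# Sum Direct and Total Costs per fiscal year across all PI age groups.
# "rocktalk_age_funding.csv" and the column names are hypothetical.
with open("rocktalk_age_funding.csv") as f:
    for row in csv.DictReader(f):
        year = row["Fiscal Year"]
        direct[year] += float(row["Direct Costs"])
        total[year] += float(row["Total Costs"])

# Effective Indirect Cost Rate = (Total Costs - Direct Costs) / Direct Costs
for year in sorted(direct):
    rate = (total[year] - direct[year]) / direct[year]
    print(f"{year}: {rate:.1%}")

Grouping by PI age group instead of (or in addition to) fiscal year gives the per-cohort curves discussed next.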

To try to gain some insight, I looked at these data as a function of PI age group.

[Figure: Effective Indirect Cost Rate by PI age group, 1998-2014]

The differences between the age groups are quite substantial and surprising. For the three youngest PI age groups, the Effective Rate is relatively constant at around 47%, consistent with my previous R01 indirect cost analysis. For the older PI age groups, the Effective Rate falls steadily from 1998 to 2012, reaching rates as low as 27.5% for PIs aged 61-65 in 2012.

I certainly do not understand what underlies these trends, but differences in funding mechanisms could be involved; it may be that mechanisms such as U01s for larger efforts are important. As always, it would be best to see the data broken down by mechanism to facilitate accurate interpretation.

Any other thoughts on these data are most welcome.

Do try to keep up
http://drugmonkey.scientopia.org/2015/03/26/do-try-to-keep-up/ (Thu, 26 Mar 2015)

I hope you all have read through the Bridges to Independence (2005) report. Yes? It's freely downloadable and told us a lot about the state of NIH extramural funding, age cohorts and demographic disparities....a DECADE ago.

So when Rockey posts abbreviated data sets.....yeah.

Is NSF's postdoc mentoring plan actually doing anything?
http://proflikesubstance.scientopia.org/2015/03/26/is-nsfs-postdoc-mentoring-plan-actually-doing-anything/ (Thu, 26 Mar 2015)

NSF first introduced the Postdoc Mentoring Plan as a supplementary document a few years ago. At the time, everyone was all:


LOOK AT ME TYPING A "PLAN"! YES A PLAN!

There was basically no information on what we should be writing, and panels had no idea what they should be expecting. It was a free-for-all, and plans ranged from "Trust me, I do this" to two pages that made it sound like the postdoc would be working six jobs at once. In the years since, things have stabilized and there are numerous examples out there to guide people putting their plans together.

But has it DONE anything? Are NSF postdocs mentored better today than 5 years ago? How would we even know?

Ok, so I'll go on record that I am totally behind the idea and philosophy of the postdoc mentoring plan. I get it, and I honestly want to put my postdocs in the best position to succeed with whatever they want to do as a career (which may not be a TT position). I think it's valuable for PIs to think about the training environment they are providing and what the alternatives are.

Do I think the PDMP achieves those goals? Probably not.

Why? Because I think the people who take it seriously are those who take postdoc training seriously in the first place. It's easy to toss words on a page that sound great without ever doing a damn thing about them. Most of all, NSF funding being what it is, it is RARE for a postdoc to be present when the mentoring plan is put together. Nearly every PDMP I see is either "postdoc TBD" or "potential postdoc X". Having an in-house postdoc who is funded and will transition to the new grant is just hard to do, given the grant cycle and budget limitations of NSF. All of which is to say that most postdocs will likely never even see the mentoring plan submitted for the grant that pays them.

And what does it matter anyway? There is no possible way I can imagine that NSF could enforce any of it. Unless a PI puts in specific assessment goals (useless if you don't already have a PDF in house) or commits money to some sort of external training, there's no way for NSF to evaluate whether you are doing anything you said you would. It rests entirely on faith that merely making you think about it was enough to effect change.

And finally, how would we even know whether this is effective? There is no way to assess differences in postdoc mentoring amid so many confounding variables. The PDMP is like an untestable hypothesis, and we're being told to go along because it probably does something. Maybe.

Again, in a vacuum I think it's a good idea. But supp docs in these proposals continue to multiply faster than deanlet positions. I recently submitted a proposal that required four supp docs, at two pages each. That's another half a proposal, if you're counting at home. And with the new Nagoya Protocol going into effect, you can bet anyone collecting samples outside the US on NSF money is about to have some new paperwork. The supp docs keep piling up, so I don't think it's a terrible thing to ask whether or not those documents are achieving their goal.

In the case of the PDMP, there's no way to answer that. And so we just write them so we can hold them up and say we did something. And that, my friends, is the definition of make-work paperwork.
