One of the obvious desires and needs of the newly minted Assistant Professor is to rapidly establish his or her independent laboratory focus: to show the world, in both formal and informal ways, that all that brilliant work has indeed been driven forward by this new Principal Investigator.
As part of this, it is necessary to take full credit for the work that has been done primarily by this young person's laboratory. It can be acceptable in some situations to take a bit of extra credit by inference when the work has been collaborative in nature, particularly when only that Assistant Professor is under review.
It is dangerous, however, to fail to modulate these claims of credit for collaborative work when all of the participants in the collaboration are under simultaneous review. On the tactical level, you do not want your reviewers thinking that two, three or more labs are taking credit for the exact same thing. On a strategic level, you ARE going to piss off your collaborators. And this is the sort of thing that induces collaborators to stop collaborating with you and just do it themselves.
When you are the more-junior partner in this scenario, the odds are that the more-senior person is going to have the greater relative ability (funds and personnel) to cut you off and continue by other means.
As a related issue, one of the skillsets you need to develop as a scientist is a decent Spidey-sense for collaborators. Some are going to be selfish, and some are going to bend over backward to let you take credit, to help your career along and to promote you. The latter are ESSENTIAL to your success. The former must often be tolerated, and you do well to protect yourself from them. However, if you cannot discern the two types relatively rapidly and act accordingly, you run the risk of really pissing off people* who would otherwise be your champions.
Don't do this.
*Remember that unless the person has "Emeritus" after their title, bending over backward to allow you to take credit is not necessarily immaterial to them. This is a reality. No matter how seemingly established a more-senior colleague is, they are worried about the future. There is always the next grant review. Doing a colleague a solid costs them something. The fact that they think this is the right thing to do, regardless, doesn't mean that they do not do so with a conscious nod to the costs involved.
There seems to be a subpopulation of people who like to do research on the practice of research. Bjoern Brembs had a recent post on a paper showing that the publication slowdown associated with having to resubmit to another journal after rejection costs a paper citations.
Citations of a specific paper are generally thought of as a decent measure of impact, particularly if you can relate them to the size of the subfield.
Citations to a paper come in various qualities, however, ranging from the totally incorrect (the paper has no conceivable connection to the point for which it is cited) to the motivational (the paper plays a highly significant role in the entire purpose of the citing work).
I speculate that the large bulk of citations are to one, or perhaps two, sub-experiments. Essentially a per-Figure citation.
If this is the case, then citations roughly scale with how big and diverse the offerings in a given paper are.
On the other side, fans of "complete story" arguments for high-impact journal acceptances would suggest that the bulk of citations are to this "story" rather than to the individual experiments.
I'd like to see some analysis of the types of citations won by papers. All the way across the food chain, from dump journals to CNS.
We must tread lightly when equating the amount of work that justifies a publication with either dollars or hours spent.
But if the standard for reasonable productivity under a grant award (such as the R01) is, say, 6+ papers, and reviewers and editors think a single pedestrian paper should contain most of what is proposed in that entire award, then someone is not playing with a full deck.