I first noticed this in chemistry, but now I'm seeing it in engineering, too. Major publishers, content vendors, and indexing services are starting to offer services whereby your company can either use their search tool to index your local content and display it side by side with their content, OR use their API to pull their content into your local tool.
That's a common complaint in research labs and in companies. Knowledge management of internal information gets funded and defunded, and is cool and then not cool... External information is familiar to people coming out of school... so how can you search across both?
We have the artist formerly known as Vivisimo (now IBM something spherical, I think) as our intranet search, and I would love to see it offer our discovery-layer-type search as a tab. I don't see why it couldn't.
This deserves analysis and thought - no time. Sorry!
Everyone is talking about my favorite subject - it's hard to keep up! I've been reading a lot of it on my mobile device while feeding Thing 1 and Thing 2, but that makes it hard to blog.
I hope to do some round-up posts and some commentary around:
- journal impact factor
- publishers finally defending their practices and responding to calls for open access, pressures from PLoS ONE and similar journals, etc.
- economic and business publications covering scholarly comms
- what AIP Publishing's ending of service to non-member societies means (it means lots of very good journals finally getting onto functional platforms, among other things)
Probably some other stuff, too.
It's really healthy to have these discussions and I'm just disappointed that I don't have the time to follow them as closely as I would like. I might actually resort to a link dump if all else fails...
Here's a blog post that needs to be written: Ed Yong mentioned in his session earlier today (scio11, 1/15, 2-3) writing about an article from Nature on chickens and getting an e-mail from a chicken farmer. This goes back to the multiple kinds of expertise and the knowledge that practitioners have. The example from the literature is sheep farmers, but it's the same thing.
My point is that there should be a way for researchers and these expert non-scientists to collaborate on a blog or in social media to increase mutual understanding, help the research, and help the practice.
Is it possible or even desirable to have one search interface that serves every need?
I have about 10 minutes to write this placeholder of a post. Hopefully, I'll get the opportunity to revisit this topic near and dear to my heart later.
I've often railed against naive librarians and administrators who insist we need "google boxes" as our only interface for every system, for every need, regardless of what is behind the box. In fact, we just had this discussion with our enterprise search consultants, but anyhoo.
This particular post was prompted by Martin Fenner's discussion of the new PubMed redesign. He rightfully (IMHO) points out that this one interface is supposed to serve clinical medicine, research in the life sciences, librarians supporting those two, journalists, parents of sick children, etc. It's also supposed to help find journal names, researchers, exact citations, genes, and proteins (to be honest, I don't know what most of those choices on the dropdown do). He says that he thinks it's leaning more toward life sciences research and away from clinical medicine now.
PubMed is actually a great example. If it were possible to have a single interface, and if NCBI had provided it, then the proliferating other tools that also search the same data would not exist. Clearly, you need different information if you are in those different groups. You also need a different interface - and by interface I mean support in query formulation, results pages, and help adjusting your search - for the things besides journal articles by topic.
But if you go to the trouble of designing different interfaces, how do you funnel people into them? Based on their query? Have them select (as if they will!)? As a way to narrow?
So anyone who's spent any time at all with Google Books (henceforth GB) has probably noticed some really bizarre - I mean truly strange - metadata: messed-up titles, authors, and publication years, and categories that are totally hit or miss. I frequently take for granted that everyone has seen all of the memes that go around in library web 2.0 circles. But that's crazy, of course. So I'll just throw this at you scattershot.
At a meeting at the Berkeley iSchool on the GB settlement (and that's another thing I should blog about but don't have time for the research needed), linguist Geoff Nunberg tore up GB for this. See his blog post and then the PDF of his slides.
Google responded. One of their arguments is that their data providers gave them crap - or at least conflicting, merely okay stuff. Heh. If you mashed up every US and UK academic library catalog, you would still have better metadata than they have, and they only had to pick one library (the originator) for each scan and then map the LCSH to the BISAC. Seriously. Sure, certain fields would be weird, but we've had machine-readable standardized records for decades, and decent cataloging for decades before that. And they have the whole load from our massive union catalog, WorldCat.
I mean, that doesn't stop me from using it, but I'm just using it for natural-language full-text searching and for linking out from my library's catalog, which is cool. Linguists, apparently, thought they could rely on the metadata when using GB as a corpus for analysis.
I have great post titles and topics in my head, but much less time to blog! (Quick review: I work 40-50 hours/week, sometimes work 4 hours on Sunday afternoons, am a doctoral student preparing for comprehensive exams in mid-July, am married with a house, an old cat (in kidney failure - which creates clean-up issues), and a dog... well, you get the idea.)
So, I'm going to post blog post titles from time to time, and allow the reader to fill in the responses for themselves.
When is it a valid test of the scholarly communication system to perpetrate a hoax, and when is it a party foul? If it is a valid test of peer review, then what inferences can you legitimately make, given success? Research ethics deals with balancing the potential harm to the participants against the good for society as a whole (among other things). Is it ethical (or legitimate) if you pick on a poorly-regarded publisher, and use that to impugn the reputation of well-regarded publishers? Does the harm outweigh the scientific merit?