Archive for the 'information retrieval' category

Looking at ROI or at least financial justification for SDI or alerting services

Dec 30 2013 Published by under information retrieval, libraries

Used to be that special libraries were the ones always asked to show their return on investment and justify budget expenditures - it was obvious that universities needed libraries and they were judged on the number of volumes (like that was ever a sensible metric for the value of a school!). In the past decade or so public libraries have been under more pressure to show ROI and they do so by showing economic benefits to the community from having library services in an area (there are also many more dimensions used in a nuanced way - see iPac's work). There's a nice (if a tad dated) review by Cory Lown and Hilary Davis here: http://www.inthelibrarywiththeleadpipe.org/2009/are-you-worth-it-what-return-on-investment/.

The term SDI - selective dissemination of information - was coined in the 1960s, but no doubt librarians have always performed this function. Whether formally or informally, we are asked to keep a lookout for things of interest to our customers/users/patrons, etc., and bring them to their attention. Formally, we might have a budget we charge our time or even resources to, and we do a detailed reference interview in which we establish the topic but also the type of information desired, the treatment, time period, frequency, and some gauge of whether the person would rather get some junk but be less likely to miss something (high recall) or is ok with being more likely to miss something but wants only things that are likely to be very relevant (high precision).
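For anyone who wants the textbook version: precision is the fraction of what the alert returns that's actually relevant, and recall is the fraction of everything relevant that the alert manages to return. A quick Python sketch, with made-up numbers for illustration:

    def precision_recall(retrieved, relevant):
        # retrieved: set of document IDs the alert returned
        # relevant: set of document IDs the customer actually wanted
        hits = retrieved & relevant
        precision = len(hits) / len(retrieved) if retrieved else 0.0
        recall = len(hits) / len(relevant) if relevant else 0.0
        return precision, recall

    # Made-up weekly alert: 10 items delivered, 4 on target,
    # plus 2 relevant papers the search profile missed entirely.
    retrieved = {"d1", "d2", "d3", "d4", "d5", "d6", "d7", "d8", "d9", "d10"}
    relevant = {"d1", "d2", "d3", "d4", "d11", "d12"}
    p, r = precision_recall(retrieved, relevant)
    print(f"precision={p:.2f} recall={r:.2f}")  # precision=0.40 recall=0.67

Tuning the profile toward high recall pushes the first number down; tuning toward high precision pushes the second down.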

With this information the librarian might just set up a search in a database, tweak it a bit, and then walk away. She might have the results come to her and weed them every week. Alternatively, she might actually schedule some time to read the news and look out for things and then write up summaries.  Informally, it might be just that a librarian knows a customer's interests and if/when she sees something, she forwards it.

Once a database alert has been set up and is running, further intervention is only needed if the database vendor changes something or if there's a request. The problem with this is that the end customer can (and often will) forget how they came to get this useful little email every week. We found when we needed to clean up our Dialog account that there were alerts from a former librarian who died in maybe 2002 or so (before I got here). They were super useful to the users, who passed them around within their group, and we were able to re-write them using our current site license to the database and save that money. If there hadn't been a bill, we wouldn't have known, and certainly those engineers had forgotten.

So what if one of those alerts had a gem in it that the recipient wouldn't have heard about otherwise, and that caused them to start a new research program or innovate on a project or save money or ...? Would the library, or more importantly, the people who pay for the library, ever hear about it? No.

As for the informal mode, in which we keep an eye out for customers: that can be really hit or miss. Sometimes there are all kinds of interesting things going on and other times there's nothing. Maybe we point out 100 things of interest for 1 home run. Maybe allowing ourselves the time to look - to read the news, the industry newsletters, the science magazines (society member magazines like Physics Today, EOS, etc.) - isn't doable. That's a huge problem. It looks like you're doing nothing but fooling around on the internet. When you do send something good, they might be like "great - send one this good every week!" or "more like this!"

We were going to start up sector newsletters here, but it's really not sustainable because you have to look around and read for a while to find new and interesting things worth alerting people to. Sure, it's super useful, but how many hours go into each home run? The bosses very much appreciate these tips when they get them, but they do not want to pay for the time for people to look for the information.

My old boss used to say that we needed to be just-in-time not just-in-case and that's total baloney. Libraries by definition are just-in-case. These alerting services are just-in-case. Metrics like number of alerts sent out are not useful. Stories of times these alerts were truly useful and used are great - but you have to hear them and record them.

My library has lost some big battles in justifying our existence, so I am clearly not that effective at this. It's a sticky question, I think. My blog posts always peter out like Crichton novels, but oh well. Happy New Year - hopefully we'll still be providing library services in the new year after we're re-organized again, sigh.

 

One response so far

How not to support advanced users

Oct 29 2011 Published by under information retrieval, interfaces

At first I wasn’t going to name names, but it seems like this won’t make sense unless I do.

Over the years Cambridge Scientific Abstracts became CSA and is now just part of ProQuest. The old peachy tan-colored interface always supported advanced searching. When the tabbed olive-colored interface came out a few years ago, some of the advanced search features were a little buried, but you could still find them (I blogged about it then, but was corrected by someone who showed me where they were). The databases I’ve always used on CSA are very specialized. I use Aerospace and High Technology the most, but I also use Oceanic Abstracts and Meteorological and Geoastrophysical Abstracts. For my own work, I also use LISA.

I find that for topics like missile design, including hypersonics and propellant formulations, and spacecraft design, Aerospace and High Technology does much better than the general databases like Compendex. Oceanic Abstracts is a great complement to GEOBASE (and GeoRef, but meh) on other topics I research.

I have search alerts set up in these various databases. Some I review and forward to my customers whereas others I keep for my own use. The alerts take advantage of the advanced searching available and are tweaked over time to be as precise as possible.

So now that we’re all moving to the new ProQuest interface, it was time to translate my searches to the new format. Luckily, ProQuest has a help page that takes you from the searches in the old interface to the new one. I have to say, though, that there are pieces missing. I found that in Illumina (the olive-colored interface), I could just use kw to get the primary fields out of the record and leave off the references. In the new interface, I had to list all of the fields individually. Also, I had a real problem nesting all of the searches I needed to do. Long story short, I did manage to figure out some satisfactory searches for the alerts.

Now, here’s what actually prompted me to write this post. I am an advanced user and I do have a lot of experience with different interfaces. When I do find a problem in the interface, I’ll report it – particularly if it’s keeping me from performing some task.

In the new interface, if you have something more than the basic search, it often will not let you see the last few pages of results.

For example, in Aerospace (the name now leaves off high tech, let’s hope it still covers the same content):

propellant friction sensitivity – is just fine and you can see all the results

propellant AND “friction sensitivity” – either done through the basic search screen or done through the advanced search, will not let you see the third page. It gives an error.

Fine, so I reported this to their help desk. They replied a week later and we’ve been exchanging e-mails ever since. They assumed I was technologically inept, that my computer was broken, that my library had set up something wrong with the database, that our network was messed up, and that we had a proxy server causing errors. I sent them the error messages from the screen. I sent them screenshots. I tried the same search on three browsers and got another librarian to try from her computer. We could all replicate the problem. They said they visited my library’s web page and couldn’t find a link to the database. Well, *my library* doesn’t have an external web presence – at all! Further, I had already given them the direct URL and told them at least three times that I wasn’t going through a proxy server because I was on campus. They wanted a screenshot of the search screen (?!?), so I sent that.

Yesterday morning, I got another e-mail. Upon further investigation, they found that this was… a known error… and that technical services was working to fix it. The workaround is to re-sort the records until I’ve seen them all.

Do they have any idea how mad that makes me? How much time I spent proving I was seeing what they already knew was happening?  Did they even check their knowledge base or did they decide to screw with me for three weeks before even checking?

I’ve had it, but damn it, I need that stinking database for my work and there’s no other real option. GRRR.

Is this how to treat your advanced users?  The first search string I sent them should have clued them in (it’s not the one above, it’s much longer). Plus, they asked and I told them I was a librarian when I submitted the report.

3 responses so far

Research Database Vendors Should Know

Research database vendors - the ones who design the interfaces that end users use - should know that data export is not a trivial add-on. Rather, it is an essential part of their product.

Over and over and over again, librarians complain about interfaces: the one that works one day and doesn't work the next. The one that doesn't output the DOI unless you select the complete format. The one that all of a sudden stopped exporting the journal name. The interfaces that don't work with any known citation manager. The ones that download a text file with 3 random fields instead of directly exporting the full citation and abstract.

But you blow us off and you act like it's not important.

Well. I was just talking to a faculty member at another institution - even though a particular database is the most appropriate for her research area and she finds interesting papers there, she now refuses to use it because it doesn't export to EndNote correctly. She's tired of the frustration, and she is tired of finding that she has to edit everything she's imported, so she's just given it up.

Once again librarians are telling you something and you need to listen. Researchers and faculty are super busy. They will not keep using your product if it makes their life harder. If they don't use your product then we'll stop subscribing. That's all there is to it.

2 responses so far

Lamenting the poor support of the expert user

Apr 01 2011 Published by under information retrieval

For some project I’m on at work, I had cause to look up “flow” and how to basically support/encourage that in information systems. Ben Bederson’s 2004 piece in Ubiquity seems to be one of the standard articles.* Being in the flow is

When we are fully engaged and in control of an activity, we sometimes sense that time passes more quickly and we feel immersed in that activity to the exclusion of all else.

Bederson emphasizes these characteristics of flow from Csikszentmihalyi:

  • Challenge and require skill
  • Concentrate and avoid interruption
  • Maintain control
  • Speed and feedback
  • Transformation of time

Flow doesn’t happen when it’s all easy and floating past you; it happens when you’re engaged and challenged. Bederson talks about Emacs and Adobe Photoshop – they’re difficult for newbies to learn, but once you’re an expert you can immerse yourself in your work and not interrupt your flow.

Taking this to the obvious next step (for me), how can/do/should interfaces support scientists, scholars, and librarians in the flow of literature-based discovery/analysis and writing?

I guess scientists think more of this flow when they’re doing the bench work or maybe designing a study (or do they?)… but there’s no reason that you can’t get into the flow when you’re searching and reading the literature or writing. I’ve had it happen where I’m reading an article and it gives me a million research ideas and it is really great and really useful. What about the actual search process?

Let’s look at those features with the current systems:

Challenge and require skill. For the most part, current information retrieval systems advertise that they require no skill and that they aren’t challenging at all. Some that do pose a challenge are more frustrating than powerful. Librarians and others who do a lot of online searching know that there is skill involved in getting the best results, even from the simplest Google interface.

Concentrate and avoid interruption. For the most part, this seems largely out of the system’s hands. Few of the information retrieval systems pop up distracting windows. It would be nice, however, if systems didn’t time you out. That’s miserable: you go to follow a lead and look at an article, but when you come back you need to click to start a new session.

Maintain control. Well, this is something that few systems now really emphasize. You can do a fielded search on many of the different systems, but they don’t all have proximity operators or really advanced functions. Some do, but it’s hidden (if you’re an advanced searcher you’ll find it, but still). They also seem to hide the controlled vocabulary – if they have one. Some systems have added automatic stemming (generally a good thing) or even some mild automatic query expansion, but they have to let you turn this off. It’s easy to turn off in EngineeringVillage2, but I’m not sure about other tools.

Speed and feedback. Most are pretty fast – except for Leadership Directories (why on earth is that thing sooo sloooow?). Feedback varies. Certainly faceted presentation of search results is a great addition that really provides a ton of feedback of how your query was interpreted in the system. Spelling suggestions from EbscoHost have been hilarious (did you mean caber warfare? um no)  - that’s feedback, but meh.

Transformation of time. The one system that has all the power and control, poses a challenge, and requires skill doesn’t let me forget the time at all! Dialog classic, I’m lookin’ at you! What about other systems? When I’m on the hunt and I’m finding great stuff, I can get pretty immersed – I have to remember to write down my steps so I don’t wander too far from my path.

 

In blog posts about the design of a new overlay for the library catalog of the larger institution of which MPOW is a division, Jonathan Rochkind talks about the design of a system he believes provides the magical combination of utility for the newbie or casual searcher and powerful features for the advanced searcher. Don’t get me wrong, I’m not saying the library catalog could ever really be an immersive place, but it is heartening that the team paid such careful attention to what the users said, more than the tripe we’re fed about the simplistic search box.

I think the system that will do the best for the advanced searcher will offer the power and precision of an expert search - not just dropdown fields, but the ability to use au= AND ti= AND su= ( OR ), etc., within a box. Stemming is fine, but you need to be able to turn it off. It’s fine to expand on a term (a system that ORs on the alternate spellings of Qadaffi would be most useful), but you need to be able to turn that on and off within a search, term by term (I used a system that literally applied metaphone to every term in the query and *you couldn’t turn it off*). I think the underlying data has to be very high quality: accurate citations, a tight thesaurus that is consistently applied, coverage of the appropriate resources… and, while I’m dreaming, authority control on the author names. There has to be support for using the subject indexing (locating the right term). Once the results have been retrieved, there should be some visualization and analysis tools. Faceted presentation of search results is a start, and visualization of the citation network is good, but there could be other analysis tools. Jumping off from linked authors or subject terms or citations is useful, too.
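To make that concrete, here’s a little Python sketch that composes the kind of nested, fielded query I have in mind. The field tags, the w/3 proximity operator, and the * truncation are all hypothetical syntax, loosely modeled on classic command-line searching rather than any particular vendor’s:

    # Toy query builder. The au/ti/su tags, "w/3" proximity, and "*"
    # truncation are illustrative syntax only, not a real vendor's.
    def field(tag, *terms, op="OR"):
        return f"{tag}=({f' {op} '.join(terms)})"

    query = " AND ".join([
        field("su", '"hypersonic vehicles"', '"scramjet engines"'),
        field("ti", "propellant w/3 sensitivit*"),
    ])
    print(query)
    # su=("hypersonic vehicles" OR "scramjet engines") AND ti=(propellant w/3 sensitivit*)

The point isn’t the builder itself; it’s that the searcher should be able to type (or assemble) exactly this kind of nested, field-qualified expression in one box.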

Will all this lead to flow? Not sure. It’s still very separate from the writing process and the synthesis of information. A project at MPOW is looking at how to integrate search into the report-writing process (hints of what they did in this paper, pdf). I’m not convinced this is going to work, but it’s worth some thought. Others have integrated sense making with information retrieval; I’m not sure how that fits with flow either.

The perfect system (which of course varies by user, by task, etc.) may very well integrate query formulation (reference interview-like support), retrieval, sense making, writing, fact checking, visualization….

Back to the title of the piece. The primary users of research databases in the sciences really aren’t the 18-year-old undergrads. Why do we keep hearing from proud database vendors that their products are optimized for such users? As in the flow article, as well as some of my favorite articles by Soergel and Bates (individually – no co-authorships), it’s ok to require the user to learn, to adapt to the system. Don’t feed me a line about the benefit of no information on the screen and an empty box while you take away the features that work for me, one of your heaviest users.

 

* Clearly there was a ton of stuff before this – enough to be summarized in a popular book by Csikszentmihalyi – but this article pops up when you apply the idea to interfaces.

Reference

Bederson, B. B. (2004). Interfaces for staying in the flow. Ubiquity, 5(27). DOI: 10.1145/1074068.1074069 (goofy citation, but this is all I can get from the ACM DL and his website)

2 responses so far

Google Recipes: kinda cool

Feb 24 2011 Published by under information retrieval

Recipes really lend themselves to fielded search and faceted presentation, but most sites don't do either well. I love Cooks Illustrated, but their search has been quite awful. Epicurious is much better, but with all that great structured data, you would think they could rock it.

Google has come out with a recipe search that takes advantage of the markup some sites have added. You can narrow by prep time, calories and often ingredients.

I'll have to try it some more with some real searches. It seems pretty cool so far. There's no need to memorize any pages; you can just do a normal Google search and then limit to recipes afterward.

No responses yet

Discovery layers a little way on

One of the reasons Google Scholar is so attractive is that it covers every area of research. Another is that it’s quick. Another is that it’s free. But it doesn’t necessarily go that far back, it’s unclear exactly what it covers, it lags publication (and we don’t know by how long), and it doesn’t support very sophisticated searches. Plus, there’s no controlled vocabulary behind it, so you don’t get all the relevant results. And of course there’s no substructure searching :)

Library tools such as catalogs and research databases have other sets of problems. The research databases that have powerful search tools and are really well indexed with a good controlled vocabulary tend to cover a fairly narrow research area: physics, medicine, chemistry. Other tools that cover a broad group of topics probably don’t have indexing as good as Google Scholar’s, but they offer full text. Catalogs are typically miserable to search. It’s very hard to find what you’re looking for in most catalogs, particularly if you’re doing a subject search and not looking for a known item.

The narrow bit is a particular problem in newer disciplinary areas or interdisciplinary areas. That’s one of the reasons libraries started licensing federated search products, maybe like 10 years ago? I should probably explain again, although I’m fairly certain I must have before. A federated search takes your query, translates it for a set of pre-selected databases, sends it out, and compiles a list of results. Compare this to something like Google, which has already gone around to all the websites, crawled them, stored the results, and then created optimized indexes and whatnot. So you begin to see why federated searches seem really slow. Plus, the search language ends up being lowest common denominator, with only a limited number of fields. What’s worse is that out of the box the results pages weren’t very well done (there’s an add-on that we have that improves this a lot).
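For illustration only – not how any vendor actually builds theirs – here’s a toy Python sketch of the fan-out-and-merge mechanics, with stub functions standing in for the pre-selected databases:

    from concurrent.futures import ThreadPoolExecutor

    # Stub connectors: a real one would translate the query into the
    # vendor's syntax, send it over the wire, and parse the response.
    def search_inspec(query):
        return [{"title": "Example record", "year": 2011, "source": "Inspec"}]

    def search_compendex(query):
        return [{"title": "Example record", "year": 2011, "source": "Compendex"}]

    def search_ntis(query):
        return []

    BACKENDS = [search_inspec, search_compendex, search_ntis]

    def federated_search(query):
        # fan the query out to all backends at once; the response is
        # still only as fast as the slowest database
        with ThreadPoolExecutor(max_workers=len(BACKENDS)) as pool:
            result_lists = pool.map(lambda backend: backend(query), BACKENDS)
        # merge, crudely de-duplicating on (title, year) - real products
        # have to do this with much messier metadata
        merged, seen = [], set()
        for records in result_lists:
            for rec in records:
                key = (rec["title"].lower(), rec["year"])
                if key not in seen:
                    seen.add(key)
                    merged.append(rec)
        return merged

    print(federated_search("hypersonic AND propulsion"))

Notice the query that goes out is the lowest common denominator: nothing more specific than what every backend can accept.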

So a few vendors managed to negotiate deals with the database producers to – yes – index them in advance, and to show the results in a slick interface. These things are being called discovery layers. You only get the results of the databases you pay for from the original vendor. (Well, that brings up a question – for something like, say, Inspec, we pay the database producer fee plus a markup for the interface provider… I wonder if you’d just pay the first part? Dunno.) Anyhow, you get the speed of something that is indexed in advance and the benefits of the underlying databases. Typically they’ll suck up your catalog and institutional repository, too.

Your reaction is probably like mine was: how do you get all of the underlying databases to sell to you? Without them, it falls apart.

So that brings us up to the current part of the story. I’ve mentioned how Ebsco is really on a power-grabbing mission. They own a bunch of databases. They are also developing one of these discovery layers. Well, one of the competing discovery layer vendors wrote a letter to all of their customers saying Ebsco had pulled its information out. Iris has all the details in her post – so I won’t repeat them – but apparently none of the discovery layers are providing information to any of the others. That leaves us with crawl-and-cache for one big vendor and federating the competition’s, no matter which vendor we pick.

Other things called discovery layers are actually just overlays for the catalog. That’s what ours is going to be. That nibbles away at one part of the problem but really doesn’t approach the elephant in the room.

Sigh, a bit depressing, but now that I ponder the whole thing, I’m not sure how much of a loss it is. There are lots of research efforts on using multiple ontologies to deal with scientific data coming from multiple sources. Maybe we can figure out something better, like something that really uses the controlled vocabulary.

3 responses so far

Research database data export kvetch

I'm sure I must have bitched about this before, but argh!  I don't understand, when you have nice structured data all in clean little fields, how you could so horribly and repeatedly screw up exporting to citation managers. The worst part is, even after a database has it right, they'll often screw it up when nothing else has changed.

I do A LOT of searching of research databases. Like I probably spend a quarter to three quarters of every day at work searching in some research database or another.  As I mentioned in my post on packaging results, I'll typically export results from the various places to RefWorks and then use that to compile. I'll then export from RefWorks to APA annotated with abstract for my report. I also maintain a listing of articles written by MPOW and I export from research databases to RefWorks to populate that.

After I've updated the listing of articles, I'm usually so frustrated I can barely see straight.

Here's a list of things that make me most angry:

  • the DOI field not being exported
  • a period being added to the end of the DOI field  (WTF?!?)
  • http://dx.doi.org/ being added to the front of the DOI in the DOI field
  • the DOI being routed to the Links field with the http://dx.doi.org/ in front of it
  • NTIS database results coming out as Journal Articles - it's NTIS, all of the entries are by definition technical reports even if the content was presented at a meeting or whatever
  • NTIS database results not transferring the report number
  • Conferences seem to never come in right from anywhere

Yes, I do global edits to fix all of these except the period, but that just adds more time.
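For the curious, here's roughly what those cleanup edits amount to, as a Python sketch over exported records. The "DOI" and "Links" field names are illustrative, not any vendor's exact tags; in real life I do the equivalent with global edits inside RefWorks:

    import re

    def clean_doi(record):
        # record: dict of exported fields
        doi = record.get("DOI", "")
        if not doi:
            # the DOI sometimes gets routed to the Links field with a
            # resolver prefix stuck on the front
            m = re.search(r"dx\.doi\.org/(\S+)", record.get("Links", ""))
            if m:
                doi = m.group(1)
        # strip the resolver prefix and the mystery trailing period
        record["DOI"] = doi.replace("http://dx.doi.org/", "").rstrip(".")
        return record

    rec = {"Links": "http://dx.doi.org/10.1000/example.123."}
    print(clean_doi(rec)["DOI"])  # 10.1000/example.123

(Ironically, the trailing period is the easy part in code.)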

I used to recommend EngineeringVillage2 for good data export - but they've "fixed" it, so now it does a few of these things. My current best data source is Web of Science - now that we've got an export-to-RefWorks button. It's the *cleanest* data export you'll find. People complain about it for analytics or JIF purposes - but really, take a look at the competition!

2 responses so far

I'm thankful for... citation linkers!

Nov 22 2010 Published by under information retrieval

So there are these great research databases, like PubMed, that basically tell you, "A solution exists!" Or really: there exists an article, described with these metadata*, that might answer your question. So how do you get from knowing that an article exists to having it open on your screen?

Or, if you're looking at said article and it's got a fascinating reference to another article... how do you go from knowing that article exists to having it on your screen?

We at fancy research institutions have this awesome tool that goes from "there exists an article" to having it on your screen. There are 3-4 major vendors of this kind of product. Ours is SFX from Ex Libris**, and we've branded it FindIt. We have tons of e-journals, books, conference papers, etc., so we load our holdings into this database, including coverage years and all. For the big deals, I believe we can just pick off the right package. Then, when you're in a research database, you click on the happy little FindIt button, and it finds it! If you're in PubMed and you have the LibX toolbar, you can click on the PubMed ID. Or you can go to the FindIt page and copy the PMID or the citation in there. SHAZAM! Even if our access is through an aggregator, ADS, or JSTOR, you can get to it from there.

[Screenshot: FindIt – an example of a fabulous citation linker landing page]
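Under the hood, these products are OpenURL link resolvers: the research database packs the citation metadata into key-value pairs on a URL, and the resolver matches those against our loaded holdings. A sketch of what the generated link looks like – the base URL is made up, but the key names are the standard OpenURL 1.0 (Z39.88-2004) ones:

    from urllib.parse import urlencode

    BASE = "https://findit.example.edu/resolver"  # made-up resolver address

    def openurl(citation):
        params = {
            "url_ver": "Z39.88-2004",                       # OpenURL 1.0
            "rft_val_fmt": "info:ofi/fmt:kev:mtx:journal",  # journal article
            "rft.atitle": citation["title"],
            "rft.jtitle": citation["journal"],
            "rft.volume": citation["volume"],
            "rft.spage": citation["start_page"],
            "rft.date": citation["year"],
        }
        if citation.get("pmid"):
            params["rft_id"] = f"info:pmid/{citation['pmid']}"
        return f"{BASE}?{urlencode(params)}"

    print(openurl({"title": "Example article", "journal": "Example Journal",
                   "volume": "12", "start_page": "34", "year": "2010",
                   "pmid": "12345678"}))

That's why pasting a PMID alone is enough: the resolver can look the rest of the metadata up from the identifier.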

AND, if that isn't enough to be thankful for, ours is blinged up with the Xerxes add-on, so it calls APIs from Scopus and WoS - you can see right from there if the article has been cited and click through to the citing articles. You can also see if it's on the shelf in the library (important for books).***

Are these perfect? No, far from it... but they are *so* much better than not having them. Thank you!

*Data about data. Like author, title, journal title, year, volume, page, DOI, ISSN, MeSH or keywords...

** not affiliated/not an endorsement.

*** for more details on Xerxes Umlaut, see Jonathan Rochkind's blog or this from code4lib wiki

2 responses so far

A federated chemical information search?

There’s been an ongoing thread on the ChmInf-List – it started by discussing Reaxys (what used to be Beilstein, Gmelin, and a property database, from Elsevier, sold as a site license, I think without limits on concurrent users) and whether it was worth it given that searches of Beilstein on Crossfire Commander were dropping. It then morphed into a discussion of SciFinder (the primary way to search Chemical Abstracts, the primary literature database in chemistry; Scholar is the academic version, licensed by concurrent users, while SciFinder itself is typically licensed by selling packages of searches called tasks). The thread ranged on to discussions of privacy (ceded when individual registrations were required for the web version of SciFinder) and the licenses for the two, then to the fact that the license for SciFinder Scholar says institutions can’t share usage information with each other, and the fact that SciFinder is required to have your chemistry major accredited by the ACS…

Along the way, someone mentioned that there is a limited version of Chem Abstracts available to search on other services (DIALOG) – it doesn’t have the structures and maybe some other things. I don’t know how I got onto this topic – but I then suggested that, if the license would allow it, it would be really cool to design a federated search specifically to deal with chemical properties.

Now I’ve been as skeptical of federated search as the next librarian and I also am a big fan of sophisticated searches that take advantage of the power of the local interface. But I also know that it’s not about the thrill of the chase, it’s about solving a problem. The ideal system is the one that gives you the best solution, the quickest, and also teaches you and gives you confidence in the answer.

What would be excellent is if you could federate a search across:

  • SciFinder
  • Reaxys
  • ChemNetBase (includes CRC handbook and the Combined Chemical Dictionary and other things)
  • ASM Handbooks and Phase Diagrams (this would of course be the materials science ASM)
  • SpringerMaterials (includes Landolt-Boernstein)
  • ChemSpider
  • maybe the CSA Materials Databases
  • whatever other appropriate stuff like if you have Knovel or BioRad’s Know it All U or whatever

MetaLib (our federated search product) has a fairly crappy interface out of the box and required a lot of fixing, and even with that fixing you couldn’t just throw the same solution at this search. First, this is a dream anyway, because hardly any of these chemistry sources can be federated. Second, even if they could be federated, would the fields be there, or would we only have title, author, subject? What would you display in the results beyond the value (if possible) and the citation?
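Even in the dream version, each source would need its own mapping onto some shared property-record schema before you could merge anything. A Python sketch of what I mean – every source name and field name here is invented for illustration; real sources are far messier:

    # Invented field maps: translate each source's native field names
    # onto a minimal shared schema for a property lookup.
    FIELD_MAPS = {
        "source_a": {"name": "substance", "prop": "property",
                     "val": "value", "unit": "units", "ref": "citation"},
        "source_b": {"Substance": "substance", "Property": "property",
                     "Value": "value", "Units": "units", "Source": "citation"},
    }

    def normalize(source, record):
        mapping = FIELD_MAPS[source]
        return {mapping[k]: v for k, v in record.items() if k in mapping}

    print(normalize("source_a",
                    {"name": "benzene", "prop": "melting point",
                     "val": "5.5", "unit": "degC", "ref": "CRC Handbook"}))

Without something like this, the "federated" result list collapses back to title/author/subject and loses exactly the property values you were after.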

So anyway, I suggested this and a retired professor posted to the list essentially yelling at me and others on the list for daring to

  1. discuss Scifinder on an open list (he said it should be on scholartalk but HELLO, I don’t have access to scholar!)
  2. call SciFinder a database (well it is, so there)
  3. suggest that SciFinder could be federated

I’m amazed by and thankful for the chemical information pioneers who made what we have today possible, but I am nowhere near content to leave it as it is. By tying our hands about what we can discuss, they’re hurting themselves in the end, because a large, innovative community should be a resource, not a threat.

Enough for now – none of the comments on this particular idea came from any official representatives of CAS or Elsevier. CAS’ policies are clear in the licensing agreements.

4 responses so far

Let’s Help ScienceBlogging: What design features are useful in a science blog aggregator?

First, the great news: Bora, Dave, and Anton got together and developed a website to aggregate science blog postings. It’s at scienceblogging.org. This is really still in its first stages, and they plan to continue to add to it and refine it as they go.

Here’s a screenshot (I’m guessing the page will look different over time so this way you can see it as I see it).

It's got three columns, with the top five stories from each source. The title links to the source, as do the story titles. There’s also a blogroll – an alphabetically arranged list of the sources.

The sources are a combination of blog networks like this one, Discover’s, Nature’s, ScienceBlogs.com, etc., and some news feeds. Some of the sources are in other languages (Brazilian Portuguese, German, Chinese, French).

It’s clear from the design (and the delighted reactions) that this is meant as a place to go to read a diverse collection of science posts – to get a sampling. It doesn’t link to independent blogs – except when they are aggregated by “All-geo”. It also doesn’t have any way to export the contents or really explore the contents besides browsing the titles on the front page. If you mouse over the article titles you do get a snippet of information.

What features could help the current setup?

  • some way to expand and read a snippet without mousing over – people with twitchy hands might not do well with that
  • some indication of the blog name where the article comes from
  • separately, a page providing information about each source – I know some of these, but I’m assuming a lot of people don’t
  • an OPML file or some way to export the RSS feeds to your reader (you could, of course, visit the original site or just keep coming back) – see the sketch after this list
  • I’m not sure what order things are in on the page. Maybe they should be in some categories? Some explicit organization? (Blake makes that comment here)
  • Blake also makes the comment that these various sources post at different rates, so 5 posts might stay there for a while or there might be 5 posts an hour – it’s hard to see how to deal with that.
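The OPML item, at least, is cheap to build; a Python sketch that writes one from a list of feeds (the feeds here are made up):

    import xml.etree.ElementTree as ET

    # Made-up feeds; the real list would come from the site's sources.
    FEEDS = [
        ("Example Science Blog", "http://example.org/feed"),
        ("Another Science Blog", "http://example.net/rss"),
    ]

    def build_opml(feeds):
        opml = ET.Element("opml", version="2.0")
        head = ET.SubElement(opml, "head")
        ET.SubElement(head, "title").text = "scienceblogging.org sources"
        body = ET.SubElement(opml, "body")
        for title, url in feeds:
            ET.SubElement(body, "outline", type="rss", text=title, xmlUrl=url)
        return ET.tostring(opml, encoding="unicode")

    print(build_opml(FEEDS))  # import the output into any feed reader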

Could independent blogs be added, and how?

This post from Dave puts forth some ideas for adding “science blogs”. The first problem is defining what’s a science blog. I faced this in both of my previous studies, and I solved it two different ways. In one I was very strict: self-identified scientists posting mostly on scientific topics. In the other I was more broad: the above, plus scientists posting on life in science, plus everyone else blogging about science.

What no one mentions on that post is: what is science? Are social sciences included? Librarianship? Areas of the humanities like anthropology, archaeology, communication, history? It’s really hard. Science librarians yes, others no? Well, then we’d lose Dorothea. So academic librarians? Then I’d drop off :)

First, selection and maintenance

  • Nature Blogs takes nominations and then requires two members to confirm. They require:
      1. composed mostly of original material - no press releases or lists of links
      2. primarily concerned with scientific research
      3. updated (on average) at least once a fortnight
  • Other suggestions – like from Jonathan Eisen on twitter – were to take nominations and have a curator say yes or no. This could be way, way too overwhelming and there could easily be hurt feelings if someone didn’t get included and they thought they should.
  • A variation on that is to have one or a few committees. Maybe for each subject area.
  • Maintenance is also an issue – keep dead blogs? Use an automated link checker? Manually go back and check if the person is still blogging, and still blogging about science? How often? Have a way for visitors to report problems. (Oh, and for heaven’s sake, Nature won’t let me change my URL from Blogspot – let the bloggers update their URLs.)

I sort of think the Nature way pretty much works. It’s crowd-sourced, so there’s less load. But the maintenance stuff needs to be added.

Second, organization

  • There needs to be some organization scheme. It might go deeper (with subcategories) in areas where there are a lot of bloggers
  • The organization scheme could have a couple of different facets (topical/subject – chemistry; gender; work setting – industry)
  • You should be able to look at an aggregation of each subject category and export RSS feeds from that category
  • Some of the others aggregate around what journal article or molecule is being discussed – this might be too hard, and there might not be enough content to do that
  • There could be some organization around links. See who links to this blog, see who has commented on this blog – but that would also take a lot of work.

Personally, I’m not so much interested in links to press releases and mainstream media – the bloggers pick up things like that when they’re interesting (I pick up some from the information industry). I’ve already spent way too long on this for incremental help to the founders – they have already done an amazing job. Maybe some information architect-y or user experience person might weigh in?

5 responses so far
