The Christmas season seems to be bringing up a lot of talk about e-books, journal costs (namely increases), and the role of the library in the digital age. Is it because the Kindle and Nook are popular gift wish list items? Is it because some library vendors are pushing bills later toward the end of the year? I don't know.
Robert Darnton is the Carl H. Pforzheimer University Professor and Director of the Harvard University Library. In his recent article in the New York Review of Books, The Library: 3 Jeremiads, Darnton explains many of the things we at BOT have been mentioning and discussing for some time. (Note: Darnton is an historian, not a scientist. Expect verbiage.) I confess I had to look up what jeremiad meant, as I've studied very little theology. Darnton's choice of words is interesting. The OED defines a jeremiad as "A lamentation; a writing or speech in a strain of grief or distress; a doleful complaint; a complaining tirade; a lugubrious effusion."
Once you wade past the Harvard promotional information, Darnton does a thorough job explaining the three main reasons why libraries are in such a bad place right now. Just like in the most recent BOT post (which began as a comment), one of the main themes is control. Who acquires, maintains, and relinquishes control of information is important, as libraries routinely give up control of many aspects of what they do for the greater good of society and culture.
What is Darnton's solution? Creating another library resource, the Digital Public Library of America (DPLA). Per his description, it is "a digital library composed of virtually all the books in our greatest research libraries available free of charge to the entire citizenry, in fact, to everyone in the world." I see this as a library-centric way to regain control of information and to wrest the digital library of the future away from the monopoly of Google, et al.
What will the DPLA hold? Of particular note is Darnton's suggestion that: "the DPLA would exclude books currently being marketed, but it would include millions of books that are out of print yet covered by copyright, especially those published between 1923 and 1964, a period when copyright coverage is most obscure, owing to the proliferation of “orphans”—books whose copyright holders have not been located." Recent attempts to create updated policies for use of orphan works in the US have been unsuccessful. Is this a way to secure mechanisms to use them without contacting copyright holders? This is very interesting, since Google Book Search (GBS) includes orphan works in its scanning program and has been taken to task for its own interpretation of copyright.
This is interesting, because one by-product of the Google Books Project, Hathitrust, is collecting GBS content as a library-side answer to Google's monopoly of the content. Hathitrust is also starting to build a governance structure, which has more libraries joining to secure a seat at the table. So GBS content will have another outlet controlled by libraries, at least for the foreseeable future. Presumably Hathitrust will hold most of the DPLA content, since the largest academic libraries have already signed onto the GBS project and their works are being scanned as I write this sentence. This also raises another interesting question: what if these GBS books are not the most important books in our culture? What if big parts of these large academic collections are filler, or less relevant for research?
Likewise, Portico and LOCKSS have created a framework to preserve most of the commercially owned journal content. While Portico isn't owned and controlled by libraries (it's owned by Ithaka, a non-profit), LOCKSS is run by the Stanford University Libraries. The only problem, of course, is that apart from individual libraries' efforts to make information available, most of this content is still controlled to some extent through toll access and/or trigger events from the archives. Open Access journal articles and subject repositories make their content available to the public for free, so some content is already accessible this way.
The last remaining segments of research are local and special collections individually housed by academic libraries, archives, historical societies, and museums. Many of these institutions are digitizing projects and collections as they are able to, and most are putting this information in repositories. Would this information be part of the DPLA? Maybe, although it's not clear from Darnton's article if this is the intention. For many scholars, this primary material is the most essential for their research, not synthesized monographs and summaries. Increasingly, what scientists want is access to research data and supplemental material, not necessarily the final published article or book on a subject.
I think Darnton has missed the point with the DPLA, and it looks to me like duplicative work being promoted to an audience unaware of the environment surrounding digital content and access to information. So I offer a jeremiad of my own: the library community needs to think more broadly and create broader pathways to content, rather than trying to create more specialized channels to information.
The concept of a national library seems outdated to me in light of today's digital environment. I frequently meet and communicate with researchers from all over the world using social networking tools and applications. Digital information doesn't have national boundaries, so why create them in a library? It seems more time should be spent looking at how to create an international digital library or repository, or at how to link existing data and research sources, rather than creating segmented units of information for specialized audiences. There is a rapidly growing collection of digital data, research material, and communications, all of which will be of tremendous importance to the next generation of researchers. Who will preserve this? How will it be preserved? This is what a DPLA should be thinking of, not items from 1923-1964 that will likely be saved through other scanning programs or as a print copy.