New & Noteworthy

Library Hi Tech News

ISSN: 0741-9058

Article publication date: 5 June 2009


Citation

(2009), "New & Noteworthy", Library Hi Tech News, Vol. 26 No. 5/6. https://doi.org/10.1108/lhtn.2009.23926eab.001

Publisher: Emerald Group Publishing Limited

Copyright © 2009, Emerald Group Publishing Limited



Article Type: New & Noteworthy From: Library Hi Tech News, Volume 26, Issue 5/6.

GE Breakthrough Validates Technology to Enable 500-Gigabyte Disc

General Electric (GE) Global Research, the technology development arm of the General Electric Company, has announced a major breakthrough in the development of next-generation optical storage technology. GE researchers have successfully demonstrated a threshold micro-holographic storage material that can support 500 gigabytes of storage capacity in a standard DVD-size disc. This is equal to the capacity of 20 single-layer Blu-ray discs, 100 DVDs, or the hard drive of a large desktop computer.
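A quick arithmetic check of the capacity equivalences quoted above (assuming the standard 25 GB for a single-layer Blu-ray disc and 4.7 GB for a single-layer DVD):

```python
# Rough capacity equivalences for a 500 GB micro-holographic disc,
# assuming 25 GB per single-layer Blu-ray disc and 4.7 GB per DVD.
HOLO_GB = 500
BLU_RAY_GB = 25
DVD_GB = 4.7

blu_ray_equiv = HOLO_GB / BLU_RAY_GB  # exactly 20 single-layer Blu-ray discs
dvd_equiv = HOLO_GB / DVD_GB          # roughly 106 DVDs

print(blu_ray_equiv, round(dvd_equiv))
```

The DVD figure comes to roughly 106, which the announcement rounds down to 100.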

GE's micro-holographic discs will be able to be read and recorded on systems very similar to a typical Blu-ray or DVD player. Holographic storage is different from today's optical storage formats like DVDs and Blu-ray discs. DVDs and Blu-ray discs store information only on the surface of the disc; holographic storage technology uses the entire volume of the disc material. Holograms, or three-dimensional patterns that represent bits of information, are written into the disc and can then be read out. Although GE's holographic storage technology represents a breakthrough in capacity, the hardware and formats are so similar to current optical storage technology that the micro-holographic players will enable consumers to play back their CDs, DVDs, and BDs.

The GE team successfully recorded micro-holographic marks approaching one per cent reflectivity with a diameter of approximately one micron. When using standard DVD or Blu-ray disc optics, the scaled-down marks will have sufficient reflectivity to enable over 500 GB of total capacity in a CD-size disc.

“GE's breakthrough is a huge step toward bringing our next generation holographic storage technology to the everyday consumer,” said Brian Lawrence, who leads GE's Holographic Storage program. “Because GE's micro-holographic discs could essentially be read and played using similar optics to those found in standard Blu-ray players, our technology will pave the way for cost-effective, robust and reliable holographic drives that could be in every home. The day when you can store your entire high definition movie collection on one disc and support high resolution formats like 3-D television is closer than you think.”

GE has been working on holographic storage technology for over six years. The demonstration of materials that can support 500 gigabytes of capacity represents a major milestone in making micro-holographic discs that ultimately can store more than one terabyte, or 1,000 gigabytes of data. In addition to pushing the limits of storage capacity, GE researchers also have been very focused on making the technology easily adaptable to existing optical storage formats and manufacturing techniques.

GE initially will be focusing on the commercial archival industry followed by the consumer market for its micro-holographic storage technology.

Full press release: http://www.genewscenter.com/Content/Detail.asp?ReleaseID=6676&NewsAreaID=2

We Need Publishing Standards for Datasets and Data Tables: OECD White Paper

On 20th April 2009, the Organization for Economic Co-operation and Development (OECD) released a white paper, “We need publishing standards for datasets and data tables,” which examines the problems with current data discoverability and citations and the remedy in creating industry standards for bibliographic dataset metadata and linking.

Written by Toby Green, Head of Publishing at OECD and an expert in data publishing, the paper details the problems users face in locating and referencing online data. Datasets are a significant part of the scholarly record and are being published much more frequently, but with widely inconsistent metadata, links, and citations.

The paper proposes bibliographic metadata standards that could be implemented to provide users and librarians with data that is as accessible and as easy to find and catalogue as written works like journal articles and book chapters. By following existing scholarly metadata standards, datasets can easily utilize the existing discovery channels that are used by e-journals and e-books, including library systems, cross reference linking, publishing platforms, and search engines.
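A minimal sketch of the idea of article-style bibliographic metadata applied to a dataset, rendered as a citation with a resolvable DOI link. The field names and the DOI below are illustrative, not drawn from the OECD proposal itself:

```python
# Illustrative bibliographic record for a dataset, using journal-style
# metadata fields so the dataset can flow through the same discovery
# channels as articles. All values here are invented examples.
dataset = {
    "author": "Example Research Institute",
    "title": "Annual energy consumption dataset",
    "publisher": "Example Press",
    "year": 2009,
    "doi": "10.0000/example.dataset.1",  # hypothetical DOI for illustration
}

def cite(rec):
    """Format a simple article-style citation ending in a resolvable DOI link."""
    return (f"{rec['author']} ({rec['year']}), \"{rec['title']}\", "
            f"{rec['publisher']}. http://dx.doi.org/{rec['doi']}")

print(cite(dataset))
```

Because the record carries the same elements as a journal-article citation, it can be indexed, linked, and catalogued by the same systems.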

The paper provides straightforward standards that publishers, librarians, and data providers can implement to improve the accessibility and usage of important datasets, both the data that underlies scholarly works and data that is published in its own right.

Permanent URL for the white paper, which includes a summary of the standards proposed and an annex with the detailed proposal: http://dx.doi.org/10.1787/603233448430.

PEER Project Issues Draft Report for Publishers and Repository Managers

PEER (Publishing and the Ecology of European Research) is a pioneering collaboration between publishers, repositories, and the research community, which aims to investigate the effects of the large-scale deposit of authors' accepted manuscripts (so-called Green Open Access) on user access, author visibility, journal viability, and the broader European research environment. Supported by the EC eContentplus programme, the PEER project will run until 2011, during which time over 50,000 European stage-2 (accepted) manuscripts from up to 300 journals will become available for archiving.

In April 2009 PEER issued a draft report on the provision of usage data and manuscript deposit procedures for publishers and repository managers. The report sets out to establish a workflow for depositing stage-2 outputs in and harvesting log files from designated repositories to facilitate the research required for PEER.

To ensure that sufficient content is made available as a research sample to validate the research process, participating publishers have agreed to collectively deposit 50 per cent of the outputs on behalf of the authors. For the other 50 per cent, publishers will invite the authors to self-archive their current manuscripts, and any previous manuscripts from participating journals. In addition to workflow, the report identifies the preferred file formats for full text and metadata to be deposited by participating publishers as well as the preferred and mandatory metadata elements.

Issues of relevance to repositories are also addressed, including the proposal to unify the ingestion services based on either format used or protocols such as OAI-PMH or SWORD, as well as procedures for the provision of usage data.
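A sketch of what the harvesting side of such a workflow might look like over OAI-PMH: building a ListRecords request and extracting Dublin Core titles from the response. The repository base URL is a placeholder, not an actual PEER endpoint:

```python
import urllib.parse
import xml.etree.ElementTree as ET

# Placeholder repository endpoint; real PEER repositories would differ.
BASE_URL = "http://repository.example.org/oai"

def list_records_url(metadata_prefix="oai_dc", from_date=None):
    """Build an OAI-PMH ListRecords request URL, optionally incremental."""
    params = {"verb": "ListRecords", "metadataPrefix": metadata_prefix}
    if from_date:
        params["from"] = from_date  # harvest only records changed since this date
    return BASE_URL + "?" + urllib.parse.urlencode(params)

NS = {
    "oai": "http://www.openarchives.org/OAI/2.0/",
    "dc": "http://purl.org/dc/elements/1.1/",
}

def titles(response_xml):
    """Extract dc:title values from a ListRecords response document."""
    root = ET.fromstring(response_xml)
    return [t.text for t in root.findall(".//dc:title", NS)]

sample = """<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/">
  <ListRecords>
    <record><metadata>
      <dc:title xmlns:dc="http://purl.org/dc/elements/1.1/">A stage-2 manuscript</dc:title>
    </metadata></record>
  </ListRecords>
</OAI-PMH>"""

print(list_records_url(from_date="2009-04-01"))
print(titles(sample))
```

The same request pattern, with a different `metadataPrefix`, would fetch whichever metadata format the repositories agree to expose.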

An updated version of this draft report will be made available by PEER later in 2009.

PEER Draft report available at: www.peerproject.eu/reports/

Progress toward a Single Shared Format Registry: UDFR Proposal and Road Map

Following the delivery of the Global Digital Format Registry (GDFR) software in August 2008, Harvard University Library held a series of discussions and in-person meetings to gauge the current needs of the digital preservation community and to test whether the requirements for a format registry were still well understood. The top issue that emerged from these discussions was the relationship between PRONOM and the GDFR. While not everyone agreed, many of those consulted believed that the community could not support two format registries. After in-depth discussions with a number of concerned parties, it was decided that a single shared format registry would be in the better interests of the community.

In order to make progress towards a single shared registry model, a format registry working group was formed in late 2008 with members from the British Library, the California Digital Library, Harvard University Library, the National Archives (UK), the National Library of Australia, the National Library of New Zealand, Portico, and Tessella. This group began to build a community around the idea of a single shared registry and solicited requirements for the registry from the community.

In the spring of 2009 NARA sponsored a meeting attended by the National Archives, Harvard University and other parties who had been working with NARA on its registry efforts. An agreement was forged to bring together the two registry efforts under a new name – the Unified Digital Formats Registry (UDFR). The registry would support the requirements and use cases of the larger community compiled for GDFR and would be seeded with PRONOM's software and formats database.

The UDFR proposal and 16-month road map are now available on the GDFR website. The proposal calls for the immediate formation of an ad hoc governing body in anticipation of the transfer of governance to a permanent body six months later. Pam Armstrong, Manager of the Library and Archives Canada's Digital Repository Services and Standards Office, has agreed to chair the UDFR's interim governing body. In addition to Library and Archives Canada, the interim group is composed of individuals from the National Archives, Harvard University Library, the British Library, the University of Illinois at Urbana-Champaign, Georgia Tech Research Institute, and NARA.

The plan calls for the establishment of a permanent governing body in November 2009. While specific requirements for membership in the UDFR will be worked out over the coming months, the intention is to make membership in the registry and representation in registry governance open to all institutions interested in and willing to contribute to the effort.

UDFR home page: www.gdfr.info/udfr.html

UDFR Proposal and 16-month road map: www.gdfr.info/udfr_docs/Unified_Digital_Formats_Registry.pdf

Fedora Commons and DSpace Foundation Merge

Fedora Commons and the DSpace Foundation, two of the largest providers of open source software for managing and providing access to digital content, announced in May 2009 that they will join their organizations to pursue a common mission: providing leadership and innovation in open source technologies for global communities that manage, preserve, and provide access to digital content. The combined organization will continue to support its existing software platforms, DSpace and Fedora, as well as expand its offerings to support the needs of global information communities.

The joined organization, named “DuraSpace,” will sustain and grow its flagship repository platforms – Fedora and DSpace. DuraSpace will also expand its portfolio by offering new technologies and services that respond to the dynamic environment of the web and to new requirements from existing and future users. DuraSpace will focus on supporting existing communities and will also engage a larger and more diverse group of stakeholders in support of its not-for-profit mission. The organization will be led by an executive team consisting of Sandy Payette (Chief Executive Officer), Michele Kimpton (Chief Business Officer), and Brad McLean (Chief Technology Officer) and will operate out of offices in Ithaca, New York and Cambridge, Massachusetts.

“This is a great development,” said Clifford Lynch, Executive Director of the Coalition for Networked Information (CNI). “It will focus resources and talent in a way that should really accelerate progress in areas critical to the research, education, and cultural memory communities. The new emphasis on distributed reliable storage infrastructure services and their integration with repositories is particularly timely.” Together Fedora and DSpace make up the largest market share of open repositories worldwide, serving over 700 institutions. These include organizations committed to the use of open source software solutions for the dissemination and preservation of academic, scientific, and cultural digital content.

The first new technology to emerge will be a web-based service named “DuraCloud.” DuraCloud is a hosted service that takes advantage of the cost efficiencies of cloud storage and cloud computing, while adding value to help ensure longevity and re-use of digital content. The DuraSpace organization is developing partnerships with commercial cloud providers who offer both storage and computing capabilities.

The DuraCloud service will be run by the DuraSpace organization. Its target audiences are organizations responsible for digital preservation and groups creating shared spaces for access and re-use of digital content. DuraCloud will be accessible directly as a Web service and also via plug-ins to digital repositories including Fedora and DSpace. The software developed to support the DuraCloud service will be made available as open source. An early release of DuraCloud will be available for selected pilot partners in Fall 2009.

DuraSpace will support both DSpace and Fedora by working closely with both communities and, when possible, develop synergistic technologies, services, and programs that increase interoperability of the two platforms. DuraSpace will also support other open source software projects including the Mulgara semantic store, a scalable RDF database.

More information is available at the DuraSpace website: http://duraspace.org/

OCLC Announces Web-Scale Library Management System

The Online Computer Library Center (OCLC) is connecting the content, technology, and expert capabilities of its member libraries worldwide to create the first web-scale, cooperative library management service. Member libraries can take the first step to realizing this cooperative service model with a new, “quick start” version of the OCLC WorldCat Local service. Libraries that subscribe to FirstSearch WorldCat will get the WorldCat Local “quick start” service as part of their subscription at no additional charge. WorldCat Local “quick start” offers libraries a locally branded catalog interface and simple search box that presents localized search results for print and electronic content along with the ability to search the entire WorldCat database and other resources via the Web.

OCLC plans to release web-scale delivery and circulation, print and electronic acquisitions, and license management components to WorldCat Local, continuing the integration of library management services to create the Web-scale, cooperative library service. OCLC will begin piloting the web-scale management service components this year.

The new library service design will support library management for print, electronic, and licensed materials, built on a web-scale architecture that provides streamlined workflows and cooperative solutions. The web-scale solution will interoperate with third-party business process systems, such as finance and human resources, and will reduce the total cost of ownership for libraries. The cooperative nature of the platform will create network effects for libraries through enhanced discovery, resource sharing, and metadata management, and through shared collection management information, identity management, and collective intelligence fueled by data exchanged across the cooperative and with partners.

OCLC will work with the more than 1,000 libraries and partners that are currently using OCLC library management systems in Europe and Asia Pacific to help build this service. OCLC will continue to develop and support its existing systems in Europe and Asia Pacific. OCLC will accelerate efforts to create robust data-exchange capabilities between OCLC library management systems and the WorldCat platform. Libraries and partners using current OCLC library management systems will be able to participate in this new development by adding web-based services to their local solutions to extend their services for end users.

In July 2009, libraries will be able to start using WorldCat.org as their user interface for the OCLC FirstSearch service, providing integrated access through a single search box to NetLibrary eBooks and eAudiobooks, Electronic Collections Online eJournals, OCLC FirstSearch databases, ArchiveGrid archival collection descriptions, and CAMIO (the Catalog of Art Museum Images Online). At the same time, OCLC will add an enhanced, comprehensive search capability to WorldCat Local, which will return all print, electronic, and licensed content available to the library from any location.

WorldCat Local quick start: www.oclc.org/us/en/worldcatlocal/quickstart/default.htm

OCLC web-scale Management Systems: www.oclc.org/us/en/productworks/webscale.htm

Panlibus interview with Andrew Pace on web-scale services: http://blogs.talis.com/panlibus/archives/2009/05/oclcs-andrew-pace-talks-with-talis-about-web-scale-ils.php

Online Catalogs: What Users and Librarians Want – Research Report from OCLC

In 2008, OCLC conducted focus groups, administered a pop-up survey on WorldCat.org – OCLC's freely available end user interface on the Web – and conducted a Web-based survey of librarians worldwide.

The Online Catalogs report presents findings from these research efforts in order to understand:

  • The metadata elements that are most important to end users in determining whether an item will meet their needs.

  • The enhancements end users would like to see made in online library catalogs to assist them in consistently identifying appropriate materials.

  • The enhancements librarians would recommend for online library catalogs to better assist them in their work.

The findings indicate, among other things, that although library catalogs are often thought of as discovery tools, the catalog's delivery-related information is just as important to end users. The findings also suggest two traditions of information organization at work – one from librarianship and the other from the web. Librarians' perspectives about data quality remain highly influenced by their profession's classical principles of information organization, while end users' expectations of data quality arise largely from their experiences of how information is organized on popular websites. What is needed now is to integrate the best of both worlds in new, expanded definitions of what “quality” means in library online catalogs.

The report concludes with recommendations for a data quality program that balances what end users and librarians want and need from online catalogs, plus a few suggestions for further research.

Full text of the report: www.oclc.org/reports/onlinecatalogs/fullreport.pdf

Scriblio Social Library System Version 2.7 Released

Scriblio (formerly WPopac) is an award-winning, free, open source CMS and OPAC with faceted searching and browsing features, based on WordPress. As announced on the Scriblio home page in February 2009, Scriblio version 2.7 is now available at the WordPress plugins repository and via SVN:

http://wordpress.org/extend/plugins/scriblio/

http://svn.wp-plugins.org/scriblio/tags/2.7-r1/

What's new in version 2.7:

  • An internal data model that supports original cataloging of books and archive items and bears some resemblance to MARC and other formats. This data model comes in two parts: a generic framework for working with structured data in posts (the Meditor) and an implementation of it that works well for books and digital collections (the Marcish form). The Meditor framework can be easily extended with other forms that may be more appropriate to other types of objects.

  • That data model also supports the automatic merging of records from multiple sources (or multiple copies of the same record in a single source), allowing you to easily and quickly build union catalogs or asynchronously enrich your catalog from external sources. All the data in the merged record is fully indexed and faceted.

  • A refactored SQL query architecture that better leverages the WordPress APIs and should enable better interoperability with other plugins.

  • Internal support for representing the collection in a variety of forms. Only human-readable HTML is implemented now, but DC, RDF, MARC, or others could be easily implemented.

  • Better support for automating the relationship between Scriblio and external ILSs or other systems. The III harvester, for instance, automatically harvests new records, updates previously harvested records, and fetches real-time availability information.
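The automatic record merging described in the list above can be sketched as a union of field values across records that share an identifier; the field names and merge policy here are illustrative, not Scriblio's actual implementation:

```python
# Sketch of union-catalog record merging: records sharing an 'id' are
# combined, and each field accumulates the union of values seen across
# sources. Field names and the merge policy are invented for illustration.
def merge_records(records):
    """Merge records that share an 'id'; each field becomes a sorted union."""
    merged = {}
    for rec in records:
        target = merged.setdefault(rec["id"], {})
        for field, value in rec.items():
            if field == "id":
                continue
            target.setdefault(field, set()).add(value)
    return {rid: {f: sorted(vals) for f, vals in fields.items()}
            for rid, fields in merged.items()}

# Two copies of the same record from different catalogs.
catalog_a = {"id": "oclc:1234", "title": "Moby-Dick", "subject": "Whaling"}
catalog_b = {"id": "oclc:1234", "title": "Moby-Dick", "subject": "Sea stories"}

print(merge_records([catalog_a, catalog_b]))
```

Every value retained in the merged record remains available for indexing and faceting, as the release notes describe.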

Scriblio is open source: the software is licensed under the GPL, but it is also supported by the community. The mail list (http://groups.google.com/group/scriblio/) is a great place to ask questions or point out bugs, and there are likely to be a few bugs.

Scriblio is a project of Plymouth State University, supported in part by the Andrew W. Mellon Foundation.

(http://about.scriblio.net/)

eXtensible Catalog Webcast, Software Now Available

The eXtensible Catalog (XC) Project is working to design and develop a set of open-source applications that will provide libraries with an alternative way to reveal their collections to library users. XC will provide easy access to all resources (both digital and physical collections) across a variety of databases, metadata schemas and standards, and will enable library content to be revealed through other services that libraries may already be using, such as content management systems and learning management systems. XC will also make library collections more web-accessible by revealing them through web search engines.

According to recent updates from the XC blog, the XC team is currently in the final stages of making the XC software available. The XC OAI Toolkit and the XC NCIP Toolkit were released in March 2009 under the MIT license (www.opensource.org/licenses/mit-license.php). Development and functionality enhancement will continue on these toolkits after release, but new development will take place on a publicly accessible code repository.

The XC Metadata Services, Drupal, and Learning Management System Toolkits are in varying stages of the design and development process. The code for each of these toolkits will be made available via a publicly accessible code repository as well. The Drupal and LMS Toolkits will be released under the GPLv3 (www.opensource.org/licenses/gpl-3.0.html) license.

A new webcast that describes the XC software is also available. This presentation includes video, audio, and animated slides describing all of the components of the XC software, its features, and its architecture. In addition, recent screen shots of the XC Metadata Services Toolkit user interface are provided. The webcast is a continuous presentation broken into six parts for ease of viewing. The total length is 70 minutes, and each of the five XC applications is described in detail. The links to each segment are annotated with a brief text description of the contents.

XC webcast: www.screencast.com/users/eXensibleCatalog

XC home page (with link to XC blog): www.extensiblecatalog.org/

The CACAO Project: Cross-language Access to Catalogues and On-line Libraries

The CACAO Project offers an innovative approach to accessing, understanding, and navigating multilingual textual content in digital libraries and library catalogues, enabling European users to better exploit the available European electronic content. The aim of CACAO is to provide an infrastructure that enables end users to type queries in their own language and retrieve documents and objects in any available language.

By coupling sound natural language processing techniques with available information retrieval systems and tools for facilitating the maintenance of multilingual resources, CACAO will deliver a non-intrusive infrastructure that can be integrated with current library catalogues and digital libraries.

Such an infrastructure will be crucial to promote aggregation of content at the European level. Already during the lifetime of the project, CACAO will promote the aggregation of content, in particular:

  • The European Library (www.theeuropeanlibrary.org/), the largest European aggregation of digital libraries, will adopt the CACAO infrastructure.

  • The library catalogues of the five partner countries (France, Germany, Hungary, Italy, and Poland) will be aggregated into a single multilingual access point.

  • Three thematic portals aggregating several European collections (mathematics, medieval literature, geography) will be made available to the public.

Finally, at month 12 of the project, the consortium will launch a start-up with the mission of marketing, selling, and maintaining the cross-language access platform.

CACAO is a 24-month (December 2007-November 2009) targeted project supported by the eContentplus Programme of the European Commission.

CACAO home page: www.cacaoproject.eu/home/

NISO Announces New Work on Single Sign-on Authentication

The National Information Standards Organization (NISO) announced in April the approval by the NISO Voting Members of a new work item to focus on perfecting single sign-on (SSO) authentication to achieve seamless item-level linking in a networked information environment. A new working group will be formed under the auspices of NISO's Discovery to Delivery Topic Committee to create one or more recommended practices that explore practical solutions for improving the success of SSO authentication technologies and promote the adoption of one or more of these solutions to make the access improvements a reality.

This work item is the outcome of NISO's new Chair's Initiative, an annual project of the chair of NISO's Board of Directors. NISO's current Chair, Oliver Pesch (Chief Strategist, EBSCO Information Services), has identified SSO authentication as an area that would benefit greatly from study and development within NISO, with a focus on a solution that will allow a content site to know which authentication method to use without special login URLs in order to provide a seamless experience for the user. Possible solutions include providing a generic mechanism for passing the authentication method from site to site; use of cookies to remember the authentication method that was used the last time the site was accessed by that computer; and/or providing a mechanism to discover if the user has an active session for one of the common SSO authentication methods. “By developing recommended practices that will help make the SSO environment work better [smarter],” said Pesch, “libraries and information providers will improve the ability for users to successfully and seamlessly access the content to which they are entitled.”
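One of the proposed solutions above, a cookie that remembers which authentication method the browser used last time, might be sketched as follows; the method names and cookie name are invented for illustration:

```python
from http.cookies import SimpleCookie

# Sketch of a "remember the last SSO method" cookie, so a content site
# can route returning users straight to the right login flow instead of
# showing a chooser page. Method names and cookie name are illustrative.
KNOWN_METHODS = {"shibboleth", "athens", "ip-range"}
COOKIE_NAME = "last_sso_method"

def choose_auth_method(cookie_header):
    """Return the remembered SSO method, or None to show the chooser page."""
    cookie = SimpleCookie(cookie_header or "")
    morsel = cookie.get(COOKIE_NAME)
    if morsel and morsel.value in KNOWN_METHODS:
        return morsel.value
    return None

def remember_auth_method(method):
    """Build a Set-Cookie header value recording the method just used."""
    cookie = SimpleCookie()
    cookie[COOKIE_NAME] = method
    cookie[COOKIE_NAME]["path"] = "/"
    return cookie[COOKIE_NAME].OutputString()

print(choose_auth_method("last_sso_method=shibboleth"))  # shibboleth
print(choose_auth_method(None))                          # None
```

A real recommended practice would also have to address expiry and the case where a shared machine serves users from different institutions.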

This new work follows on NISO's February 11th webinar on this topic, where the issues and potential benefits of SSO authentication were looked at from library, authentication tool, and content provider perspectives. The webinar was the first step in addressing the issue of SSO authentication; the new working group will enable all these perspectives to come together to focus on the topic as a community. NISO encourages those who would like to be a part of this new working group or to join the affiliated interest group to contact the NISO office at: www.niso.org/contact.

Draft of NISO proposed work item: SSO authentication: www.niso.org/apps/group_public/document.php?document_id=1504

Resources from SSO authentication webinar: www.niso.org/news/events/2009/authentication09/resources

Ex Libris bX Brings Web 2.0 Recommender Services to the Scholarly World

Ex Libris Group announced in May 2009 that the bX recommender service is now available to libraries worldwide, providing library users with recommendations for scholarly articles. Tapping into the power of the networked research community, the bX service generates recommendations based on the analysis of tens of millions of linking activities carried out by users at research institutions worldwide.

Web-savvy users are well accustomed to usage-based recommendations. Found on commercial websites such as Amazon.com, these recommendations have become highly popular with users, who continue to find them both relevant and valuable.

The bX recommender is a new service that taps into the power of the networked scholarly community to generate recommendations based on article usage. It reflects the growing recognition of the importance of user-driven content and marks an important step in the convergence of Web 2.0 and the scholarly world. Focused solely on the scholarly domain, bX recommendations are based on actual usage data. It is the first service to provide highly granular recommendations that point to specific scholarly articles.

The bX service is the result of years of collaborative research into advanced scholarly recommender systems conducted by the Ex Libris bX team and leading researchers Johan Bollen and Herbert Van de Sompel from the Los Alamos National Laboratory. Based on data captured through a large-scale aggregation of link-resolver usage logs, bX is an extension of the OpenURL framework. “The scholarly information space is highly distributed, with resources scattered across multiple repositories that are predominately vendor controlled,” explained Oren Beit-Arie, Chief Strategy Officer at Ex Libris Group. “Just like SFX, bX is an overlay service that enables the information-seeking user to traverse scholarly resources in a manner that is completely independent of any proprietary constraints.”
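bX builds on the OpenURL framework underlying SFX; a link-resolver request of the kind whose usage logs bX aggregates can be sketched as an OpenURL 1.0 (Z39.88-2004) key/encoded-value query. The resolver base URL below is a placeholder:

```python
import urllib.parse

# Sketch of an OpenURL 1.0 (Z39.88-2004) KEV query for a journal article,
# the kind of request a link resolver like SFX receives and logs.
# The resolver base URL is a placeholder.
RESOLVER = "http://resolver.example.edu/openurl"

def article_openurl(atitle, jtitle, issn, date):
    params = {
        "url_ver": "Z39.88-2004",                       # OpenURL version
        "rft_val_fmt": "info:ofi/fmt:kev:mtx:journal",  # journal metadata format
        "rft.genre": "article",
        "rft.atitle": atitle,   # article title
        "rft.jtitle": jtitle,   # journal title
        "rft.issn": issn,
        "rft.date": date,
    }
    return RESOLVER + "?" + urllib.parse.urlencode(params)

url = article_openurl("Holographic storage", "Library Hi Tech News",
                      "0741-9058", "2009")
print(url)
```

Aggregating many such requests across institutions is what lets bX infer which articles tend to be used together.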

Twenty research institutions, located in North America, Europe, Australia, Africa, and Asia, have already contributed their usage data to bX and tested the bX service with Ex Libris over the past several months through the bX early adopter program. Reflecting on the benefits offered by the bX service, Marvin Pollard of the California State University library consortium, which participated in the early adopter program, commented, “We are very enthusiastic about the bX recommender service. We view this service as an extremely important piece of the triangle of the discovery-recommendation-fulfilment process. This is the next ‘killer app’ from Ex Libris and follows on the success of SFX. Just as SFX has become an essential, powerful tool in connecting our researchers to the resources that they need, we are confident that the bX service will provide our users with the recommendations that they need to support their research.”

To find out more about the bX service: www.exlibrisgroup.com/category/bXOverview

RefWorks Releases RefMobile

RefWorks announced in March 2009 the launch of RefMobile, a new interface that enables students and researchers to use the RefWorks web-based research management service from anywhere, via web-enabled mobile phones, smart phones, and personal digital assistants (PDAs). The RefMobile interface gives users immediate access to the most commonly used RefWorks functions, including searching their entire RefWorks databases, viewing references by folders, adding and removing references from folders, creating new folders, and adding comments to notes fields. Users can also efficiently import new references to their RefWorks account using the new SmartAdd feature.

With SmartAdd, users simply enter basic identifying information for a publication, such as an ISBN, a digital object identifier (DOI), a partial title, or an author and publication year, and SmartAdd searches the Internet for the reference and imports it into RefWorks.
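A first step for any SmartAdd-style lookup is deciding what kind of identifier the user typed. The sketch below mimics that idea with simple pattern matching; it is an illustration, not RefWorks' actual logic:

```python
import re

# Classify a user's query as a DOI, an ISBN, or free text, so a lookup
# service can route it to the right search backend. Illustrative only.
DOI_RE = re.compile(r"^10\.\d{4,9}/\S+$")          # e.g. 10.1108/...
ISBN_RE = re.compile(r"^(?:\d{9}[\dXx]|\d{13})$")  # ISBN-10 or ISBN-13

def classify_query(text):
    q = text.strip()
    compact = q.replace("-", "").replace(" ", "")  # ISBNs often have hyphens
    if DOI_RE.match(q):
        return "doi"
    if ISBN_RE.match(compact):
        return "isbn"
    return "free-text"  # fall back to title/author keyword search

print(classify_query("10.1108/lhtn.2009.23926eab.001"))  # doi
print(classify_query("978-0-596-51774-8"))               # isbn
print(classify_query("Melville Moby-Dick 1851"))         # free-text
```

Once classified, a DOI or ISBN can be resolved to full metadata directly, while free text goes to a fuzzy search.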

“Researchers and students are increasingly on the go, but need to stay highly productive,” says Colleen Stempien, Executive Director of Operations for RefWorks-COS. “RefMobile puts RefWorks' world-class research management tools at the fingertips of researchers whenever they need them, wherever they happen to be.”

View RefMobile screen shots: http://www.refworks-cos.com/GlobalTemplates/RefworksCos/refmobilescreenshots.shtml

RefWorks website: www.refworks.com/

DeepDyve Releases Suite of Search Tools for Publisher Websites

In April 2009 DeepDyve, formerly known as Infovell, unveiled a suite of tools for publishers and scientific societies of all sizes that want to enhance the search capabilities on their websites. The suite of tools leverages DeepDyve's KeyPhrase™ algorithm, which allows users to input whole sentences, paragraphs or entire articles as their query to find related results. The Public Library of Science (PLoS) is one of the first organizations to implement DeepDyve's search technology.

“Our vision is that search is becoming more sophisticated and more decentralized. Increasingly, users are initiating their research online and they want to have search integrated seamlessly with their reading and browsing behavior – in other words, they want their content to be their query for finding comprehensive answers to difficult questions. These tools give our partners the ability to make their content more findable and to demonstrate the breadth and depth of their collection,” said William Park, CEO of DeepDyve. “We're making available some of our most frequently used search capabilities to publishers that want to give their visitors a more compelling search experience.”

The products announced are designed first to make the publisher's content more discoverable in search engines, which is where, according to a report from Outsell, more than 70 per cent of users begin their research. From there, other tools are available to increase engagement at the publisher's site by allowing users to quickly find related articles based on what they are viewing.

DeepDyve's next-generation search technology is available to publishers via a web services API (Application Programming Interface). The API can be set up to search only a publisher's own content, or to help users discover other highly relevant documents in the DeepDyve index. Searches can be launched with a few keywords, or by allowing users to use a paragraph or an entire document as a query to find articles that match the concepts described. Results are returned via an XML feed or as a hosted, co-branded web page.

The DeepDyve “More Like This Document” API enables websites to interface directly with the DeepDyve database to search for articles that are similar to a designated document within the DeepDyve index. It is designed for use by publishers whose content has been indexed by DeepDyve and who would like to include “related articles” functionality on their site without a painful custom implementation. The user may select any document to use as a query, and the title and body are compared to other documents in the DeepDyve index. The resulting documents can be limited to a publisher's own content, or may include other content in the DeepDyve index. Results are returned via an XML feed or as a hosted, co-branded web page.

The highlight widget enables users at the publisher site to simply highlight any block of text up to 5,000 characters, then run that selection as a query. DeepDyve returns only the publisher's articles in the search results, via an XML feed or as a hosted, co-branded page of results.
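Each of these tools returns results as an XML feed. DeepDyve's actual feed schema is not described in the announcement, so the sketch below assumes invented element names (`results`, `result`, `title`, `score`) simply to illustrate how a publisher's site might consume such a feed.

```python
import xml.etree.ElementTree as ET

# A hypothetical results feed of the kind a "related articles" API might return.
SAMPLE_FEED = """\
<results query="protein folding kinetics">
  <result id="a1"><title>Folding pathways of small proteins</title><score>0.92</score></result>
  <result id="a2"><title>Kinetic traps in protein folding</title><score>0.87</score></result>
</results>"""

def parse_results(feed_xml: str):
    """Extract (id, title, score) tuples from a results feed, best match first."""
    root = ET.fromstring(feed_xml)
    rows = [(r.get("id"), r.findtext("title"), float(r.findtext("score")))
            for r in root.iter("result")]
    return sorted(rows, key=lambda row: row[2], reverse=True)
```

A site embedding the feed would then render the sorted `(id, title, score)` rows as its “related articles” list, or hand the raw XML to the hosted, co-branded results page instead.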

DeepDyve is a search engine that was developed to scour the depths of the so-called Deep Web, the vast collection of information-rich content that is largely overlooked by today's traditional search engines. Since the company's launch in September 2008, DeepDyve has worked closely with major publishers, building an index with hundreds of millions of pages that showcases content from the industry's most respected research organizations, academic institutions, and professional associations. The API tools are the next step in DeepDyve's vision for enabling publishers to better utilize the Internet to reach as large an audience as possible. Each of these tools is available for free with advertising revenue sharing, or for a fee, which varies depending on volume.

The PLoS website: www.plos.org

DeepDyve website: www.DeepDyve.com

Amazon Announces e-Textbook Friendly Kindle

In May 2009 Amazon.com, Inc. introduced the Amazon Kindle DX, the new purpose-built reading device that offers Kindle's wireless delivery and selection of content with a large 9.7-inch electronic paper display, built-in PDF reader, auto-rotate capability, and storage for up to 3,500 books. More than 275,000 books are now available in the Kindle Store, including 107 of 112 current New York Times Best Sellers. Top US and international magazines and newspapers plus more than 1,500 blogs are also available. Kindle DX is available for pre-order at: http://amazon.com/kindleDX and will ship this summer.

Kindle DX's display has 2.5 times the surface area of Kindle's 6-inch display. The larger electronic paper display with 16 shades of gray has more area for graphic-rich content such as professional and personal documents, newspapers and magazines, and textbooks. Kindle reads like printed words on paper because the screen works using real ink and does not use a backlight, eliminating the eyestrain and glare associated with other electronic displays.

Kindle DX's large display offers an enhanced reading experience with another category of graphic-rich content – textbooks. With complex images, tables, charts, graphs, and equations, textbooks look best on a large display. Leading textbook publishers Cengage Learning, Pearson, and Wiley, together representing more than 60 per cent of the US higher education textbook market, will begin offering textbooks through the Kindle store beginning this summer. Textbooks under the following brands will be available: Addison-Wesley, Allyn & Bacon, Benjamin Cummings, Longman, and Prentice-Hall (Pearson); Wadsworth, Brooks/Cole, Course Technology, Delmar, Heinle, Schirmer, and South-Western (Cengage); and Wiley Higher Education.

Arizona State University, Case Western Reserve University, Princeton University, Reed College, and Darden School of Business at the University of Virginia will launch trial programs to make Kindle DX devices available to students this fall. The schools will distribute hundreds of Kindle DX devices to students spread across a broad range of academic disciplines. In addition to reading on a considerably larger screen, students will be able to take advantage of popular Kindle features such as the ability to take notes and highlight, search across their library, look up words in a built-in dictionary, and carry all of their books in a lightweight device.

Kindle DX features a built-in PDF reader using Adobe Reader Mobile technology for reading professional and personal documents. Like other types of documents on Kindle, customers simply email their PDF format documents to their Kindle email address or move them over using a USB connection. With a larger display and built-in PDF reader, Kindle DX customers can read professional and personal documents with more complex layouts without scrolling, panning, or zooming, and without re-flowing, which destroys the original structure of the document.

Kindle DX's display content auto-rotates so users can read in portrait or landscape mode, or flip the device to read with either hand. Simply turn Kindle DX and immediately see full-width landscape views of maps, graphs, tables, images, and web pages. With 3.3 GB of available memory, Kindle DX can hold up to 3,500 books, compared with 1,500 for Kindle. And because Amazon automatically backs up a copy of every Kindle book purchased, customers can wirelessly re-download titles from their library at any time. Kindle DX is just over a third of an inch thin, which is thinner than most magazines.

Just like Kindle, Kindle DX customers automatically take advantage of Amazon Whispernet to wirelessly shop the Kindle store, and Kindle DX uses Amazon Whispersync technology to automatically sync content across Kindle, Kindle DX, Kindle for iPhone, and other devices in the future. With Whispersync, customers can easily move from device to device and never lose their place in their reading.

On a separate but related note, Lexcycle, maker of the free ebook reader iPhone application Stanza, announced in April 2009 its acquisition by Amazon. For more information on Stanza: www.lexcycle.com/

Full Kindle DX press release: http://phx.corporate-ir.net/phoenix.zhtml?c=176060&p=irol-newsArticle&ID=1285140&highlight=

Internet Archive Releases New Book Reader Software

The Internet Archive has released a new book reader in beta. The new version of the book reader provides several important new features. The reader can be embedded as a widget, so it fits easily into blog posts or digital asset repository pages. It also supports full-text search within books, and the new reader can display extremely high-resolution images, up to the limit of the archival scan. A highly requested feature is support for books written in right-to-left languages, such as the new Yiddish collection, and for certain CJK historical or display variants. And it is open source.

The new reader can be chosen by selecting “Flip Book” from any book page.

Open Access Text Archive: www.archive.org/details/texts

Software download: http://github.com/openlibrary/bookreader/tree/master

Sony eBookstore Provides Access to a Half-million Free Books From Google

Starting in March 2009, the eBook store from Sony provides access to more than a half-million public domain books from Google optimized for current models of the Sony Reader.

At Sony's eBook store, a button on the front page leads to the books from Google, which people can transfer to their Sony Reader at no cost. The process is seamless for Reader owners who have an account at the store. Those new to the store will need to set up an account and download Sony's free eBook Library software. To start, people can access more than a half-million public domain books from Google, boosting the available titles from the eBook Store to more than 600,000.

“We have focused our efforts on offering an open platform and making it easy to find as much content as possible – from our store or others – whether that content is purchased, borrowed or free,” said Steve Haber, president of the Digital Reading Business Division at Sony Electronics. “Working with Google, we can offer book lovers another avenue for free books while still providing a seamless experience from our store.”

Books from Google will feature an extensive list of traditional favorites, including The Awakening, A Connecticut Yankee in King Arthur's Court, and Black Beauty, as well as a number of items that can be more difficult for people to access. For example, literature lovers can find and read The Letters of Jane Austen in addition to Sense and Sensibility and Emma. Also included are a number of titles in French, German, Italian, Spanish, and other languages. People can search the full text of the collection, or they can browse by subject, author, or featured titles.

“We founded Google Book Search on the premise that anyone, anywhere, anytime should have the tools to explore the great works of history and culture – and not just when they happen to be at a computer,” said Adam Smith, product management director. “We believe in an open platform for accessing and reading books, and we're excited to partner with Sony to help bring these public domain books to more people.”

The Reader Digital Book's high-resolution electronic paper display delivers a realistic print look that rivals traditional paper and uses minimal power. A single battery charge provides up to 7,500 pages of continuous reading. The ease of changing font sizes can make every eBook a large print book and enables libraries to improve accessibility for patrons with poor vision.

In addition to electronic books, the current Reader models support multiple file formats for personal documents and music, including PDF documents with reflow capability, Microsoft Word documents, BBeB files, and other text file formats. The device can store and display EPUB files and work with Adobe Digital Editions software, opening it up to an almost limitless quantity of content.

http://ebookstore.sony.com/

Mellon Grant Funds Digital Initiative to Push the Boundaries of the Scholarly Monograph

Six university presses will benefit from a $282,000 grant to develop a digital collection of scholarship on the archaeology of the Americas. The one-year planning grant from the Andrew W. Mellon Foundation will fund the “Archaeology of the Americas Digital Monograph Initiative,” which is intended to give scholars and professional archaeologists the ability to review data not commonly found in conventionally published volumes.

“This initiative will push the boundaries of the scholarly monograph,” said Darin Pratt, director of the University Press of Colorado. “To date, most digital publication has been the simple replication of print books in PDF or HTML format.”

The University Press of Colorado will administer the planning grant, which will fund a shared project manager. Collaborating university presses are the Texas A&M University Press, the University of Alabama Press, the University of Arizona Press, the University Press of Florida and the University of Utah Press.

“This initiative enables each press to break out of the traditional monograph form, in which it is often financially impossible to offer digital resources alongside the book,” said Kathryn Conrad, Interim Director of the University of Arizona Press. As with scholarly books in other humanities fields, sales of archaeology titles remain limited. Presses must enforce strict length and image limitations to constrain production costs. The books produced as part of this initiative will be enhanced by large data sets, color illustrations, video components, three-dimensional rotatable images, and in some cases, interactive components such as reader commenting. Ultimately, the digital platform could “quite possibly change the way scholarly resources are produced in the future,” Conrad said.

If the program reaches full implementation, the presses could create a third-party entity devoted to the creation and maintenance of the digital platform. The presses also plan to work on a business model for the proposed platform.

In addition, the collaborating presses plan to develop a prototype digital book, providing a workable platform that could be used by scholarly presses around the world.

Full press release: www.uapress.arizona.edu/news/MellonArchaeology.php

Grant Provides Support for the Development of a Music Notation Data Model

The University of Virginia Library and the University of Paderborn announced the receipt of a grant jointly funded by the US National Endowment for the Humanities (NEH) and the German Research Foundation (Deutsche Forschungsgemeinschaft e.V., DFG).

The $77,065 grant will support the development of a music notation data model and encoding scheme for music scholars, publishers, and performers. In addition to the common notation functions of traditional facsimile, critical and performance editions, the encoding scheme will provide for the capture of a composition's textual variants and their origins. Textual matter, very important to the understanding of a composition in its historical and cultural contexts, will also be accommodated.

The grant will support two workshops that will result in guidelines that can be widely used by libraries, museums, and individual scholars who engage in online research, teaching, and preservation of cultural objects. The international work group is made up of musicologists, specializing in notational styles from medieval to twenty-first century music, and technologists, with skills in music representation, schema design, optical music recognition, and software development.

Digitized sheet music will be searchable not only by title, date, and publisher, but also by song lyrics, specific note progression, genre, composer notes, and even publisher advertisements. Scholars wishing to examine multiple interpretations of a score – such as a medieval piece written in modern style – may do so with a mouse click, bypassing hours of research or notation transferring.

The expected completion date of the project is 31 July 2010.

Full press release with examples: www2.lib.virginia.edu/press/music/index.html

Indiana University Announces Release of IN Harmony Sheet Music Software and Website

Indiana University's Digital Library Program has announced the release of IN Harmony: Sheet Music from Indiana. This website provides access to thousands of pieces of sheet music from the Indiana State Library, the Indiana State Museum, the Indiana Historical Society, and the Indiana University Lilly Library and was developed by the Indiana University Digital Library Program with funding from the Institute of Museum and Library Services. Drawn primarily from the late nineteenth and early twentieth centuries, the collection includes works by well-known composers such as George M. Cohan, Cole Porter, Al Jolson, and Jerome Kern. The sheet music collections of the Indiana State Library, Indiana State Museum, and the Indiana Historical Society have been completely digitized. Thousands of items from the Lilly Library sheet music collections are currently part of the site. Work continues on creating records for more of the approximately 150,000 pieces held by the Lilly Library. As the records are completed they will be systematically added to the site.

As part of the IN Harmony: Sheet Music from Indiana IMLS project, a sheet music cataloging tool was developed to assist libraries, archives, museums, and individual collectors in describing their sheet music collections in a robust, standards-based way. The tool is a production system of the Indiana University Digital Library Program and was used to catalog more than 10,000 pieces of sheet music for the IN Harmony project. It collects descriptive metadata about sheet music and exports it in the MODS, simple Dublin Core, and OAI-PMH Static Repository formats. It is available under a BSD license from the IN Harmony Cataloging Tool SourceForge project in three formats: as a Windows installer, as a Mac OS installer, and as source code.
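As a rough illustration of what the tool's simple Dublin Core export involves (the field mapping and code here are hypothetical sketches, not the tool's own), a minimal sheet-music record might be serialized like this:

```python
import xml.etree.ElementTree as ET

DC_NS = "http://purl.org/dc/elements/1.1/"
OAI_DC_NS = "http://www.openarchives.org/OAI/2.0/oai_dc/"

def sheet_music_to_dc(record: dict) -> str:
    """Serialize a dict of sheet-music fields as a simple Dublin Core record."""
    ET.register_namespace("dc", DC_NS)
    ET.register_namespace("oai_dc", OAI_DC_NS)
    root = ET.Element(f"{{{OAI_DC_NS}}}dc")
    # Map a few common sheet-music fields onto DC elements (illustrative only).
    mapping = {"title": "title", "composer": "creator",
               "publisher": "publisher", "date": "date"}
    for field, dc_element in mapping.items():
        if field in record:
            ET.SubElement(root, f"{{{DC_NS}}}{dc_element}").text = record[field]
    return ET.tostring(root, encoding="unicode")

xml_out = sheet_music_to_dc({"title": "On the Banks of the Wabash, Far Away",
                             "composer": "Paul Dresser", "date": "1897"})
```

A record like this is the kind of payload an OAI-PMH Static Repository exposes for harvesting, which is what makes the tool's exports interoperable with aggregators.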

IN Harmony website: www.dlib.indiana.edu/collections/inharmony/

IN Harmony cataloging software download: http://inharmonycat.sourceforge.net

World Digital Library Launches

In April 2009, the United Nations Educational, Scientific and Cultural Organization (UNESCO), and 32 partner institutions launched the World Digital Library, a website that features unique cultural materials from libraries and archives from around the world. The site includes manuscripts, maps, rare books, films, sound recordings, prints, and photographs. It provides unrestricted public access, free of charge, to this material. The launch took place at UNESCO Headquarters at an event co-hosted by UNESCO Director-General, Koïchiro Matsuura, and Librarian of Congress, James H. Billington. Directors of the partner institutions were on hand to present the project to ambassadors, ministers, delegates, and special guests attending the semi-annual meeting of UNESCO's Executive Board.

Billington first proposed the creation of a World Digital Library (WDL) to UNESCO in 2005, remarking that such a project could “have the salutary effect of bringing people together by celebrating the depth and uniqueness of different cultures in a single global undertaking.” Matsuura welcomed the proposal as a “great initiative that will help to bridge the knowledge divide, promote mutual understanding and foster cultural and linguistic diversity.” In addition to promoting international understanding, the project aims to expand the volume and variety of cultural content on the internet, provide resources for educators, scholars, and general audiences and narrow the digital divide within and between countries by building capacity in partner countries.

The WDL functions in seven languages – Arabic, Chinese, English, French, Portuguese, Russian, and Spanish – and includes content in more than 40 languages. Browse and search features facilitate cross-cultural and cross-temporal exploration on the site. Descriptions of each item and videos, with expert curators speaking about selected items, provide context for users and are intended to spark curiosity and encourage both students and the general public to learn more about the cultural heritage of all countries.

The WDL was developed by a team at the Library of Congress. Technical assistance was provided by the Bibliotheca Alexandrina of Alexandria, Egypt. Institutions contributing to the WDL include national libraries and cultural and educational institutions in Brazil, Egypt, China, France, Iraq, Israel, Japan, Mali, Mexico, Morocco, the Netherlands, Qatar, the Russian Federation, Saudi Arabia, Serbia, Slovakia, South Africa, Sweden, Uganda, the United Kingdom, and the United States.

The National Library of China (NLC) contributed manuscripts, maps, books, and rubbings of steles and oracle bones that span the range of Chinese history from ancient to modern times. “The WDL project offers a brand-new platform for showcasing the diversity of the world's civilizations,” said Dr. Furui Zhan, Chief Librarian of the NLC. “This endeavour enables cultural exchange while bringing together different countries and peoples in mutual understanding and enrichment. The spirit of equality and open understanding comes into full view with the creation of this WDL. The NLC is ready to work in close cooperation with the WDL, continuing to promote in concert the prosperity and progress of all human civilizations.”

Examples of other treasures featured include Arabic scientific manuscripts from the National Library and Archives of Egypt; early photographs of Latin America from the National Library of Brazil; the “Hyakumanto darani,” a publication from A.D. 764 from the National Diet Library of Japan; the famous 13th century “Devil's Bible” from the National Library of Sweden; and works of Arabic, Persian, and Turkish calligraphy from the collections of the Library of Congress.

Ahead of the launch, Matsuura invited UNESCO member states to encourage their cultural institutions to participate in the development of the project. He noted that their participation would contribute to a truly universal digital library that showcases the cultural heritage and achievements of all countries. Matsuura also highlighted the synergies between this initiative and UNESCO's Memory of the World Program, noting that the WDL should help provide public access to digital versions of collections on the Memory of the World register.

WDL website: www.wdl.org

DLF Merges with CLIR

The Board of the Council on Library and Information Resources (CLIR) voted in April 2009 to merge the Digital Library Federation (DLF) into CLIR as a program of the Council, starting July 1, 2009. The vote follows recommendations by a DLF Review Committee in March 2009 to merge the two organizations, and a unanimous vote of consent by the DLF Board on April 8. With the merger, DLF's current members will become “charter sponsors” of the DLF program at CLIR. CLIR will hire a program officer to lead DLF initiatives. CLIR will continue to convene forums and will also convene special thematic sessions, with a goal of more in-depth exploration of collaborative activities. A transition committee drawn from the CLIR and DLF boards will guide the initial stages of the merger; the DLF Board will also nominate two members to serve on the CLIR Board. The new members' terms will start in July; they will run for three years and are subject to renewal.

Additionally, the DLF Aquifer Metadata Working Group released a brief report summarizing the Working Group's activities through the life of the DLF Aquifer initiative, reflecting on the impact and effectiveness of these activities, and suggesting some directions future similar initiatives might explore. “Advancing the State of the Art in Distributed Digital Libraries: Accomplishments of and Lessons Learned from the DLF Aquifer Metadata Working Group” can be found online at: https://wiki.dlib.indiana.edu/confluence/download/attachments/28330/lessonsLearnedMWG.pdf.

Founded by 16 institutions in 1995 as a project of CLIR, DLF's mission was to “enable new research and scholarship of its members, students, scholars, lifelong learners, and the general public by developing an international network of digital libraries.” Membership had since grown to 42, including several international institutions. In 2005, DLF became an independent organization but continued to work closely with CLIR. In recommending the merger, the Review Committee cited a maturing of the digital landscape, as well as the economic efficiencies of consolidating the two organizations and the potential added value of leveraging the programmatic strengths of each.

DLF website: www.diglib.org/

CLIR website: www.clir.org/

CRL Undertakes Assessment of Portico and HathiTrust

In 2009 the Center for Research Libraries (CRL) will undertake in-depth assessments of two repositories of interest to the library community: Portico and HathiTrust. The purpose of these assessments is to promote understanding of, and where justified confidence in, digital repositories. In today's economic climate, libraries must realize the greatest possible return on their investment in electronic scholarly resources and digital preservation services, and must move more aggressively to reduce the costs of redundant print holdings.

Portico has agreed to cooperate with the CRL audit, with the goal of certification as a trustworthy digital repository. HathiTrust has asked CRL to assess its digital repository, which includes not only Google Books digitization content but a considerable amount of non-Google content as well. Concurrently CRL is working with LOCKSS to assess the capabilities of the LOCKSS system for harvesting and archiving digitized primary source materials and related metadata. CRL is also gathering information about regional efforts to host licensed digital content locally.

CRL will report its findings at a workshop in November 2009. The CRL workshop will provide an opportunity to share useful information about the costs, benefits and risks of the various cooperative repositories and preservation services.

To guide its assessments CRL has formed a panel of advisors who represent the various sectors of its membership. The Certification Advisory Panel will ensure that the certification process addresses the interests of the entire CRL community, and will include leaders in collection development, preservation, and information technology. Its members are:

  • Martha Brogan (Chair), Director of Collection Development & Management, University of Pennsylvania

  • Winston Atkins, Preservation Officer, Duke University

  • Bart Harloe, Director of Libraries, St Lawrence University

  • William Parod, Senior Repository Developer, Northwestern University Libraries

  • Anne Pottier, Associate University Librarian, McMaster University

  • Oya Y. Rieger, Associate University Librarian for Information Technologies, Cornell University

  • Perry Willett, Digital Preservation Services Manager, California Digital Library

The work of the panel will be particularly timely. The economic downturn is forcing library directors to confront the formidable and growing costs of managing physical collections. Most libraries now face difficult decisions about acquiring and maintaining physical collections. Preserving and maintaining shared physical collections at CRL will continue to benefit libraries, as it has for the past 60 years. Certification will augment CRL's strategic archiving of print, and support a responsible transition to electronic-only formats where appropriate. Toward that end, CRL is working with the University of California to design a shared print journal archiving effort that suits CRL member needs and means. A further call for participation in that project will be issued in the near future.

Trustworthy Repositories Audit and Certification checklist (TRAC): www.crl.edu/content.asp?l1=13&l2=58&l3=162&l4=91

CRL website: www.crl.edu

Three Projects Honored for Innovative Thinking in Resource Sharing

Officials with the Rethinking Resource Sharing Initiative have honored three projects for innovative thinking in resource sharing; each recipient was recognized for improving patrons' access to library information through resource sharing.

The Initiative presented awards to the Orlando Memory Project, a digital archive and social networking community where the users select and contribute content; Rapid ILL, a collaborative article requesting and delivery system; and Kentucky Libraries Unbound, a digital collection of local history materials made available via OverDrive.

Each of the three award winners will receive $1,000 and will be recognized for their resource sharing efforts on May 13, 2009 at the Rethinking Resource Sharing Forum 2009, in Dublin, Ohio.

Funding for the 2009 Innovation Awards is provided by the Alliance of Library Service Networks (www.librarynetworks.org), a group of US independent regional networks that includes Amigos, BCR, FEDLINK, ILLINET, INCOLSA, MINITEX, MLC, MLNC, NELINET, Nylink, OHIONET, Lyrasis, and WiLS. The Nebraska Library Commission is also a member. OCLC and BCR provide ongoing support for the initiative. The Rethinking Resource Sharing Initiative is an ad hoc group that advocates a complete rethinking of the way libraries conduct resource sharing in the context of the global internet revolution and the developments that have arisen from it.

Orlando Memory Project: http://dc.ocls.info/

Rapid ILL: https://rapid2.library.colostate.edu/Default.aspx

Kentucky Libraries Unbound: http://kyunbound.lib.overdrive.com/

Library of Congress Launches YouTube Channel

In April 2009, the Library of Congress launched a pilot project on the online video portal YouTube to offer selected items from its collections of early motion pictures, along with recordings of Library-sponsored events, lectures, and concerts. Through this pilot, the library – home to over 1.2 million film, television, and video items – is sharing items from its collections with people who enjoy video but might not visit the library's own website (similar to one of the goals of the library's Flickr pilot, which focused on photographs).

To view the videos on YouTube, go to the Library of Congress channel at: http://www.youtube.com/loc. You do not need a YouTube account to watch or embed the videos you find there; you need to sign up for a free account only to subscribe to the channel. All library content that can be accessed on the YouTube channel is also available on the Library of Congress website, loc.gov, which remains the primary source for the library's digitized collections.

The library's YouTube channel features a variety of video presentations to introduce the YouTube community to the breadth of online video content available from the library. The library launched with six playlists on the YouTube channel: historic films from the American Memory collection, and a variety of library webcasts, including author presentations from the 2008 National Book Festival and the Center for the Book's Books and Beyond series, curator discussions from the Journeys and Crossings series, and lectures by John W. Kluge Center scholars. Look for more playlists in the future as additional video from these and other collections is added each month.

Library of Congress on YouTube: www.youtube.com/loc

YouTube pilot on the Library of Congress blog: www.loc.gov/blog/?p=467
