New & Noteworthy

Library Hi Tech News

ISSN: 0741-9058

Article publication date: 3 August 2012


(2012), "New & Noteworthy", Library Hi Tech News, Vol. 29 No. 6.



Emerald Group Publishing Limited

Copyright © 2012, Emerald Group Publishing Limited


Article Type: New & Noteworthy From: Library Hi Tech News, Volume 29, Issue 6

“Big Data”: new report provides first appraisal of Digging into Data Challenge

In June 2012, the Council on Library and Information Resources (CLIR) issued the first public appraisal of the Digging into Data Challenge, an international grant program first funded by the US National Endowment for the Humanities (NEH), the US National Science Foundation (NSF), the Joint Information Systems Committee (JISC) in the UK, and the Canadian Social Sciences and Humanities Research Council.

The report, One Culture: Computationally Intensive Research in the Humanities and Social Sciences, was made public at the Joint Conference on Digital Libraries (JCDL 2012) in Washington, DC.

The Digging into Data Challenge was launched in 2009 to better understand how “big data” changes the research landscape for the humanities and social sciences. Scholars in these disciplines now use massive databases of materials that range from digitized books, newspapers, and music to transactional data such as web searches, sensor data, or cell phone records. The challenge seeks to discover what new, computationally based research methods might be applied to these sources.

In its first year, the Digging into Data Challenge made awards to eight teams of scholars, librarians, and computer and information scientists. Over the following two years, report authors Christa Williford and Charles Henry conducted site visits, interviews, and focus groups to understand how these complex international projects were being managed, what challenges they faced, and what project teams were learning from the experience.

Their findings are presented in One Culture, along with a series of recommendations for researchers, administrators, scholarly societies, academic publishers, research libraries, and funding agencies. The recommendations are “urgent, pointed, and even disruptive,” write the authors. “To address them, we must recognize the impediments of tradition that hinder the contemporary university’s ability to adapt to, support, or sustain this emerging research over time”.

Brett Bobley, Chief Information Officer and Director of the NEH Office of Digital Humanities, heads the Digging into Data Challenge. “Do we have big data in the humanities and social sciences? Yes – buckets of it,” he says. “But our ability to produce huge quantities of digital data has outstripped our ability to analyze and understand it. One Culture helps us to see not only why we would want a computer to assist us with our work, but how big data is changing the very nature of traditional humanistic research.”

Co-author and CLIR President Charles Henry said: “This report discloses the complexity and sophistication of humanities and social sciences research in a digital era. It underscores the excitement and potential of new discovery through deep collaboration across disciplines and affirms the continuity of traditional values and perspectives of scholarly communication in a data-dependent milieu. The report also seeks to animate a collective responsibility to more concertedly appreciate, extend, fund, and provide adequate services to sustain this remarkable research.”

In 2011, four additional funding bodies joined the four original cooperating agencies in support of 14 new international collaborative research projects. These funders include the Institute of Museum and Library Services (USA); the Arts and Humanities Research Council (UK); the Economic and Social Research Council (UK); and the Netherlands Organisation for Scientific Research.

JISC Director Stuart Dempster said: “We are proud to be a partner in this trans-Atlantic endeavor which aims to assist individual researchers, academic departments, and research institutions to succeed with the ‘data deluge’ in the humanities. For the UK to continue to punch above its weight in terms of digital scholarship and research it is vital for it to collaborate in ‘smart partnerships’, which foster innovation in the development of tools, skills, and new research findings. This report shows that success in action.”

“The CLIR report is an excellent assessment of this unique and exciting international partnership,” said Gisèle Yasmeen, Vice-President, Research, at the Social Sciences and Humanities Research Council. “The Digging into Data Challenge project is generating innovative computation and data analysis techniques to better advance research, and we look forward to its continued success.”

“NSF has found the Digging into Data Challenge to be an excellent mechanism for enabling collaborative, data-intensive research in the social sciences and humanities,” said Elizabeth Tran, Program Officer in NSF’s Office of International Science and Engineering. “It has significantly reduced some of the key barriers to conducting research across borders and has resulted in a number of truly outstanding international research projects.”

The report is available online in PDF format at:

Case studies, not included in the print version, are also available in HTML format at the same URL. Print copies will soon be available for ordering through the web site.

Thomson Reuters unveils Data Citation Index for discovering global data sets

At the American Library Association (ALA) Annual Conference in Anaheim, CA, the Intellectual Property & Science division of Thomson Reuters previewed the Data Citation Index™. The Data Citation Index is an upcoming research resource within the Web of Knowledge℠ that will facilitate the discovery, use and attribution of data sets and data studies, and link those data to peer-reviewed literature.

This new research resource from Thomson Reuters creates a single source of discovery for scientific, social sciences, and arts and humanities information by connecting foundational research within data repositories around the world to related peer-reviewed literature in journals, books, and conference proceedings already indexed in the Web of Knowledge.

The Thomson Reuters Data Citation Index, scheduled for release later in 2012, makes research within the digital universe discoverable, citable and seamlessly linked to the article detailing the outputs from the original investigation. Thomson Reuters has partnered with data repositories such as the Inter-University Consortium for Political and Social Research (ICPSR) to capture bibliographic records and cited references for digital research, facilitating visibility, author attribution, and ultimately the measurement of impact of this growing body of scholarship.

“We are excited to partner with Thomson Reuters in the building of the Data Citation Index,” said Mary Vardigan, Assistant Director of ICPSR. “By linking publications in the Web of Science to the datasets on which they are based and enhancing the discoverability of data through the Data Citation Index, Thomson Reuters is highlighting the importance of research data in the scientific process.”

“The Data Citation Index will revolutionize the way data sets are discovered and utilized,” said Keith MacGregor, Executive Vice President of Thomson Reuters.

“It will enable researchers, institutions and funders to gain a comprehensive view into the origins of research projects and influence the future paths they take, while also eliminating the duplication of work and speeding the scientific research process.”

Data Citation Index:

Thomson Reuters Web of Knowledge:

Frontiers, challenges, and opportunities for information retrieval: report from SWIRL 2012

SIGIR, the Special Interest Group on Information Retrieval of the Association for Computing Machinery (ACM), addresses issues ranging from theory to user demands in the application of computers to the acquisition, organization, storage, retrieval, and distribution of a broad range of unstructured data including text, images, video, audio, and recorded speech. SIGIR sponsors its own conference on research and development in information retrieval and co-sponsors several other conferences and workshops.

During the three-day Strategic Workshop on Information Retrieval in Lorne (SWIRL 2012), held in Victoria, Australia, in February 2012, 45 information retrieval researchers met to discuss long-range challenges and opportunities within the field. The result of the workshop is a diverse set of research directions, project ideas, and challenge areas.

The most recent issue of the SIGIR Forum newsletter contains a summary report from SWIRL 2012. This report describes the workshop format, provides summaries of the broad themes that emerged, includes brief descriptions of all the ideas, and discusses in detail six proposals that participants voted “most interesting”. Key themes include the need to: move beyond ranked lists of documents to support richer dialog and presentation, represent the context of search and searchers, provide richer support for information seeking, enable retrieval of a wide range of structured and unstructured content, and develop new evaluation methodologies. The authors hope the report will prove thought-provoking and serve as the foundation for additional workshops, research projects, and grant proposals.

The workshop site has information about the workshop and a link to the report (citation: James Allan, Bruce Croft, Alistair Moffat, and Mark Sanderson (eds.), “Frontiers, challenges, and opportunities for information retrieval: report from SWIRL 2012”, SIGIR Forum, 46(1), 2-32, June 2012). The report is also directly accessible via:

Large-scale deposit in repositories increases access and use: statement from OpenAIRE

OpenAIRE, a European initiative co-funded by the European Commission (EC), welcomes the results of the PEER project, presented on 29 May in Brussels. Publishers, research libraries and research organisations collaborated effectively in building a controlled research environment to study the effects of green open access. Usage research in this so-called “PEER Observatory” revealed that large-scale deposit of research articles results in increased access and use, including via the publisher web site.

Norbert Lossau, Scientific Coordinator of OpenAIRE and member of the PEER Executive, pointed out that “the economic research of the PEER project could not find any evidence for the hypothesis that self-archiving affects journal viability”. He called upon publishers, libraries and repositories to re-use the PEER infrastructure for large-scale publisher- and library-assisted deposit of research articles.

“Re-using the PEER infrastructure and stepping up the transition from subscription to gold open access journals will provide comfortable ways for researchers to comply with the important open access mandate of the European Commission which we expect to be expanded in Horizon 2020.”

OpenAIRE is building a pan-European publication infrastructure, bringing together 33 European countries to provide open access to European research results. It collects publications resulting from EC-funded projects with the aim of improving the visibility of European research, and supports the EC’s open access pilot. Future services deployed by OpenAIRE will include statistics support and the creation of complex publications that link articles to research data. OpenAIRE collaborates with publishers, repositories and data providers in order to enable seamless integration of European research into global knowledge infrastructures.

On 11 June, the European OpenAIRE initiative held the first workshop in a series on “research data linked to publications”, in conjunction with the Nordbib Conference in Copenhagen. The workshop covered research data policies, implementation from institutional and funder perspectives, and cross-linking from publications to associated data sets.

OpenAIRE took this opportunity to agree on a joint statement responding to the results of the recently finalized European PEER project. The statement is supported by UK RepositoryNet.

For more information visit:

Bibliotheca to support open source eBook model

Bibliotheca, a leading global developer and supplier of technologies designed to enhance library efficiency and the user experience, is partnering with the library community to facilitate adoption of open source platforms for the delivery of electronic content. The company will build upon the concepts originally designed and developed by the Douglas County Libraries, CO (DCL) to enable libraries, first in North America and then around the globe, to meet the many challenges that the emerging world of eBooks presents.

Monique Sendze, Associate Director of Information Technology at DCL, will be joining Bibliotheca to lead a new Bibliotheca eBook division. “My heart belongs to libraries and I have devoted many of my waking hours over the past two years to making DCL’s eBook solution a reality. I am excited by the opportunity to take what I’ve learned at DCL to make eBooks more affordable and user friendly for libraries and their patrons throughout the world.”

Objectives of Bibliotheca’s eBook model include lower acquisition costs through a library cooperative buying platform, seamless integration with all major ILS to create a user-friendly environment for patrons, and expansion of digital content made available by publishers to libraries. Bibliotheca looks to help the library community achieve these objectives through installation, customization, technical support and hosting services.

“We at DCL came to the conclusion that the existing eBook models did not present a viable long term eBook solution, functionally or financially,” states Director Jamie LaRue.

“The DCL model is predicated on ownership, discounts, and integration. Bibliotheca’s eagerness to test and take our model to the global market is significant – a game-changer. It has the potential to restore purchasing power to libraries, predictability and fairness to the publishing environment, and greater access to content for our public.”

Shai Robkin, President of Bibliotheca in the Americas states: “We have been paying close attention to the frustrations libraries have expressed with the existing eBook delivery models while, at the same time, watching the exciting work being done at DCL. While our focus to date has been on providing libraries with the best tools possible for managing their physical collections, we are uniquely positioned to provide them with robust e-content delivery solutions. In this context, we are also excited by the potential to integrate eBook delivery into our existing self-service kiosks.”

For more information on eBooks, visit:

OverDrive introduces OverDrive Read™ browser-based eBook reader

Global eBook distributor OverDrive has announced plans to launch later this year a new eBook reading platform, “OverDrive Read”. Based on open standards HTML5 and EPUB, OverDrive Read creates a fresh, direct and immersive reading experience offering significant benefits for publishers, booksellers, libraries and schools. Unlike eBook apps or devices, OverDrive Read enables readers using standard web browsers to enjoy eBooks online and offline without first installing any software or activating their device.

Based on best-of-breed technology developed by a recently acquired Australian eBook firm, OverDrive Read will provide new options for millions of readers who access eBooks from OverDrive’s global network of retail, library and school catalogs. Browser-based eBooks improve discovery and give authors and publishers social options to connect more directly with readers. To that end, OverDrive Read creates a URL for each title where previews, review copies, browsing and sampling can be widely and easily promoted. OverDrive Read supports both online and offline reading with configurable, industry-approved copyright protection for eBooks.

As with other browser-based systems, OverDrive Read will enable publishers, authors and retailers to benefit from more direct engagement with readers and to gather data about how users discover, browse and select eBooks and catalogs through OverDrive’s global channels. “OverDrive Read’s use of open web standards will enable online communities to accelerate discovery and socialization of eBooks,” said Erica Lazzaro, OverDrive’s Director of Publisher Relations. “It will enable OverDrive’s catalog of premium eBooks from over 1,000 publishers to be easily integrated across retail, school and library catalogs for standard computers and connected mobile devices”.

When launched, OverDrive Read will complement OverDrive’s support for EPUB and PDF eBooks on a broad range of dedicated e-Ink eReaders, smartphones, tablets and computers. OverDrive also provides distribution of eBooks and audiobooks using OverDrive® Media Console™; the application has been installed more than 15 million times on PC, Mac®, iPhone®, iPad®, Android™, BlackBerry®, and Windows® Phone devices. OverDrive Read will be available through all of OverDrive’s extensive eBook catalogs available to its global network of booksellers, OEMs and other resellers – as well as through public, school and corporate libraries – in more than 20 countries.

With a catalog of more than 800,000 eBooks and audiobooks, OverDrive provides digital distribution services for more than 18,000 libraries, schools and retailers worldwide with support for major desktop and mobile platforms listed above, as well as Kindle® (US only), Sony® Reader and NOOK™.

To find a bookseller or library in the OverDrive network, visit:

Recommended practices for demand-driven acquisition (DDA) of monographs: NISO launches new initiative

The National Information Standards Organization (NISO) voting members have approved a new project to develop recommended practices for the demand-driven acquisition (DDA) of monographs. Many libraries have embraced DDA (also referred to as patron-driven acquisition) to present many more titles to their patrons for potential use and purchase than would ever be feasible under the traditional purchase model. If implemented correctly, DDA makes it possible to purchase only what is needed, allowing libraries to spend the same amount of money as they previously spent on monographs, but with a higher rate of use. However, the model requires libraries to develop and implement new procedures: adding titles to a “consideration pool”, keeping unowned titles available for purchase for some future period (often years after publication), providing discovery methods for titles in the pool, establishing rules on when a title is purchased outright rather than temporarily leased, and handling multiple formats of a title.
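As an illustration only, the trigger logic behind such rules can be sketched in a few lines; the thresholds, the short-term-loan step, and the action names below are hypothetical assumptions, not recommendations from the NISO project:

```python
# Hypothetical DDA trigger rules. Thresholds and action names are
# illustrative assumptions, not drawn from the NISO initiative.
def dda_action(uses: int, stl_trigger: int = 1, purchase_trigger: int = 3) -> str:
    """Decide what a patron's nth substantial use of a consideration-pool
    title does under a simple demand-driven acquisition policy."""
    if uses >= purchase_trigger:
        return "purchase"          # buy the title outright; it joins the collection
    if uses >= stl_trigger:
        return "short-term loan"   # pay a temporary-lease fee for this use
    return "discovery only"        # browsing below the trigger costs nothing

# Example: the third substantial use tips the title into a purchase.
print(dda_action(3))  # purchase
```

The point of encoding the rules this way is that the purchase and lease triggers become explicit, auditable parameters rather than ad hoc decisions.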

“DDA is a significant disruption in the existing supply chain for monographs,” explains Michael Levine-Clark, Collections Librarian and Professor at Penrose Library, University of Denver. “Not only for libraries but also for publishers, sales agents, aggregators, and end-users. New roles and practices need to be shaped in a way that allows the scholarly communication supply chain to continue to function effectively.”

“Most DDA to date has focused on ebooks,” states Kathleen Folger, Electronic Resources Officer at the University of Michigan and the outgoing chair of the NISO Business Information Topic Committee. “However, some programs already encompass print books and there is increased interest among libraries in using DDA across formats. The new NISO initiative will explore recommendations, hopefully with a single set of practices, that will cover both electronic and print formats.”

“Most libraries that have experimented with DDA have been in the academic sector,” states Todd Carpenter, NISO Executive Director. “NISO intends to involve the public library community with this project and develop recommendations that can work for all library types”.

Individuals interested in participating in this working group should contact Nettie Lagace, NISO Associate Director for Programs. An interest group list for this project will be available for those who would like to receive updates on the Working Group’s progress and provide feedback to the group on its work. To subscribe, send an e-mail to:

More information about NISO:

NISO publishes updated recommended practice on SERU: a shared electronic resource understanding

NISO has announced the publication of a new edition of the recommended practice “SERU: A Shared Electronic Resource Understanding” (NISO RP-7-2012). The SERU Recommended Practice offers a mechanism that can be used as an alternative to a license agreement by expressing commonly shared understandings between content providers and libraries. These understandings cover such things as the definition of authorized users, expectations for privacy and confidentiality, and online performance and service provisions. The 2012 update recognizes the importance of making SERU more flexible for those who want to expand its use beyond e-journals, while acknowledging that consensus for other types of e-resource transactions is not as well established as it is for e-journals.

“The 2008 version of SERU was eagerly adopted by a number of libraries and publishers to streamline the acquisition of e-journals,” states Selden Lamoureux, E-Resources Librarian with SDLinforms and Co-chair of the NISO SERU Standing Committee. “Since then, with the many emerging models for acquiring ebooks, both libraries and ebook providers have requested that other types of electronic resources be incorporated into the SERU framework. This new version uses language that can be applied to a wide variety of e-resources while retaining the same shared understandings that made the previous version so useful.”

“SERU offers publishers and libraries the opportunity to save both the time and the costs associated with a negotiated and signed license agreement by agreeing to operate within a framework of shared understanding and good faith,” explains Judy Luther, President of Informed Strategies and Co-chair of the NISO SERU Standing Committee.

“SERU reflects some well-established and widely accepted common expectations concerning e-resources acquisitions. In those instances where there is as yet no standard expectation, a shared understanding may still be achieved if expectations are clearly articulated in the purchase order that accompanies SERU.”

“Widespread adoption of the SERU model for electronic resource transactions offers substantial benefits to both publishers and libraries by removing the overhead of license negotiation,” asserts Todd Carpenter, NISO Executive Director. “The SERU Registry of those interested in using the SERU approach already contains over 70 publishers and content providers and 185 libraries and consortia. The expansion of the recommendations to address additional types of e-resources should interest more organizations in joining the SERU registry.”

The SERU Recommended Practice, the SERU Registry, and additional helpful resources are available from the SERU workroom webpage on the NISO web site:

3M donates Standard Interchange Protocol (SIP) to National Information Standards Organization (NISO)

3M Library Systems and NISO have joined together in an effort to drive future innovation of the Standard Interchange Protocol (SIP) as an American National Standard. SIP was originally developed by 3M to provide a common communication language that would drive adoption of self-service systems for libraries, and it has become the de facto standard for communication between self-service devices and integrated library systems (ILS) globally.

Shortly after 3M introduced the first SelfCheck system for libraries in 1992, it quickly became evident that there was a need for a standardized communication mechanism for ILS and self-service devices. As a leader in library technology and a strong advocate for the advancement of libraries, 3M initiated development of SIP which paved the way for self-service devices in libraries and brought many new suppliers and innovations to the market.

Since the inception of SIP in 1993, 3M has continued to lead the development of updated versions, most recently with version 3.0 released in late 2011. Each update has addressed the evolving needs of libraries and has provided extensions to simplify and support automated materials handling systems, PC management systems and fine and fee payment solutions.
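For a concrete sense of the protocol, SIP messages are flat ASCII lines; in SIP2, the version most widely deployed, error detection uses a sequence-number field labelled AY and a checksum field labelled AZ, where the checksum is the 16-bit two's complement of the sum of the message's character codes, written as four uppercase hexadecimal digits. A minimal sketch (the “99” SC-status message below is a commonly cited example; full field semantics come from the SIP2 specification, not this article):

```python
def sip2_checksum(msg: str) -> str:
    """SIP2 checksum: sum the character codes of the message up to and
    including the 'AZ' label, negate in 16-bit two's complement, and
    format as four uppercase hexadecimal digits."""
    total = sum(ord(c) for c in msg) & 0xFFFF
    return format((-total) & 0xFFFF, "04X")

# '99' SC-status message with sequence number 1, then its checksum.
body = "9900302.00AY1AZ"
print(body + sip2_checksum(body))  # 9900302.00AY1AZFCA5
```

A self-service device appends the four checksum digits after AZ, and the ILS recomputes the sum to detect corruption on the wire.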

“We are proud of the contributions SIP has made to the library community over the years. While 3M has always sought input from the libraries, developers and interested parties in enhancing the protocol, the time is right for further development of SIP to be done in a more independent, community environment which NISO provides,” said Skip Driessen, Global Business Manager for 3M Library Systems.

“The maturity of the SIP protocol and its implementation track record should allow it to move quickly through the NISO standardization process,” stated Todd Carpenter, NISO Managing Director. “We anticipate that version 3.0, as it currently stands or with very minor revisions, will be adopted as a standard following a brief period of review within a NISO working group.”

Anyone interested in participating in the working group to review SIP 3.0 and prepare it for balloting as a NISO standard should contact NISO at:

More information about the project, including the project proposal, can be found on the NISO web site in the SIP Workroom:

Ringgold, Bowker become first contracted ISNI registration agencies

The International Standard Name Identifier (ISNI) is an ISO standard (ISO 27729:2012) whose scope is the identification of public identities across multiple fields of creative activity. ISNI streamlines content distribution chains by disambiguating natural, legal and even fictional parties that might otherwise be confused.

ISNI is a creation of the ISNI International Agency (ISNI-IA) founded by CISAC, the Conference of European National Librarians (represented by the Bibliothèque Nationale de France and the British Library), IFRRO, IPDA, OCLC and ProQuest. The founding members include consortia representing more than 26,000 major world libraries, 300 rights management societies and research information giants OCLC and ProQuest.

Ringgold Inc. has contracted with the ISNI International Agency to be the first ISNI Registration Agency for institutional identification. Ringgold will incorporate ISNIs into its Identify database of institutional identifiers and distribute these ISNIs without charge to Ringgold’s Identify clients. Bowker, an affiliated business of ProQuest, is the first US registration agency for the new ISO naming standard.

For Ringgold’s clients, this will immediately affect over 300,000 institutions worldwide. Building on several years of experience in providing institutional identification, Ringgold will work with ISNI on the technical requirements for adding ISNI numbers to Ringgold’s Identify database. It is anticipated that all Ringgold institutional records will have an ISNI attached by the latter part of 2012. During the first year of operation, clients using Ringgold’s standard Identify services will receive ISNI numbers at no additional charge, to encourage them to incorporate ISNIs into their workflows and services. Organizations acquiring only ISNIs, without Ringgold’s other services, will be charged on a sliding scale based on the quantity of ISNIs required.

In addition, Ringgold’s free look-up service will display ISNI numbers as well as Ringgold ID numbers. The free look-up service is available at: After registration, users can search for and obtain an institutional identification number as well as basic location information and the Ringgold standard name for an institution.

ISNI has been designed as a bridge identifier between identification methods across the media industry. It provides a unique identification number for any public party, such as authors, fictional characters, musicians, rights holders, publishers, and institutions. Since 2005, Ringgold has been providing institutional identification services to publishers and intermediaries through its auditing services and Identify database.

Laura Cox, VP of Sales and Marketing at Ringgold, said: “We are delighted to have been recognised as an authority on institutional identification by ISNI and anticipate that the ability to assign ISNI numbers to institutions will open up a wide range of possibilities for everyone working in the content creation industry. The ability to map data from one source to the next gives tremendous power to the data that is generated on a day-to-day basis.”

As the first ISNI registration agency in the US, Bowker will use its experience registering international standard book numbers (ISBNs) to create a simple process for assigning ISNI’s unique 16-digit code to public identities in the media content industries involved in the creation, production, management, and distribution of content.
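The sixteenth character of an ISNI is a check character over the first 15 digits, computed with the ISO 7064 MOD 11-2 algorithm that ISO 27729 specifies (ORCID identifiers use the same scheme). A minimal sketch:

```python
def isni_check_character(digits15: str) -> str:
    """ISO 7064 MOD 11-2 check character over the first 15 digits of an
    ISNI; a result of 10 is written as the letter 'X'."""
    total = 0
    for ch in digits15:
        total = (total + int(ch)) * 2
    result = (12 - total % 11) % 11
    return "X" if result == 10 else str(result)

# Digit string from ORCID's published example identifier, which uses the
# same algorithm; the documented check character is '7'.
print(isni_check_character("000000021825009"))  # 7
```

The check character lets any ISNI consumer catch single-digit typos and most transpositions before a lookup is ever attempted.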

“Both Bowker and ProQuest have been deeply involved and strongly supportive of the creation of the ISNI because of its ability to help streamline all types of biographical research and produce more accurate results,” said Beat Barblan, Bowker’s Director of Identifier Services. “We’re pleased to now be able to help implement the process”.

The ISNI agency, a worldwide group of organizations that serve researchers, created the standard to make it easier to accurately connect information with public identities, both real and fictional, and with institutions. The ISNI disambiguates identities so that an author Michele Smith is not confused with a guitarist Michele Smith or a singer with the same name. It is especially practical for organizations administering rights, simplifying identification and administration of royalties.

Through Bowker, individuals and institutions can now apply for an ISNI. Once the number is assigned, Bowker shares it across the global digital information industry, enabling research organizations to apply it to content by or about that party held in their databases. Users tapping into any of the organizations that use ISNIs will need only a name and just enough background data to zero in on the correct identity. Then, the ISNI will take over, connecting all the appropriate public information. Users can also start with an ISNI and find the identity and data that matches it.

Use ISNI’s free lookup interface at:

Ringgold Inc.:


IFLA functional requirements namespaces published

Patricia Riva of the Bibliothèque et Archives nationales du Québec recently announced that namespaces for the functional requirements (FR) family of bibliographic metadata models have been published in resource description framework (RDF), the basis of the semantic web. The models include functional requirements for bibliographic records (FRBR), functional requirements for authority data (FRAD), and functional requirements for subject authority data (FRSAD).

The FR element set vocabularies include RDF classes and properties corresponding to FR entities, attributes, and relationships. Each class and property has a uniform resource identifier (URI) for use in semantic web data triples.

A full de-referencing service is available for each URI. When used in an ordinary web browser, the URI displays HTML pages with human-readable information about the element or concept. When used in a semantic browser, the URI retrieves machine-readable information in RDF/XML format. It is also possible to retrieve this format using an ordinary web browser.

For example, the FRBR entity-relationship (FRBRer) model’s Group 1 entity “Work” has its own URI in the FRBRer namespace. To retrieve the RDF/XML information in a normal web browser, add the file extension “.rdf” to the URI.

Another example is the FRSAD attribute “has appellation”: it corresponds to an RDF property with its own URI, and the RDF/XML file can likewise be retrieved in a normal browser by appending “.rdf” to that URI.
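To make the mechanics concrete, here is a small Python sketch of how such element URIs are used in semantic web triples and how the “.rdf” retrieval URL is formed. The namespace and identifiers below are hypothetical placeholders, since the actual IFLA URIs live in the registry and are not reproduced here.

```python
# Sketch with hypothetical URIs: one N-Triples statement using an FR
# element set property, plus the ".rdf" suffix rule described above.

FRBRER = "http://example.org/ns/frbrer/"  # hypothetical FRBRer namespace

def make_triple(subject: str, predicate: str, literal: str) -> str:
    """Serialize a single RDF triple in N-Triples syntax."""
    return f'<{subject}> <{predicate}> "{literal}" .'

def rdfxml_url(uri: str) -> str:
    """Append '.rdf' to an element URI to fetch its RDF/XML description."""
    return uri + ".rdf"

print(make_triple("http://example.org/work/1", FRBRER + "P0001", "Example title"))
print(rdfxml_url(FRBRER + "C0001"))  # → http://example.org/ns/frbrer/C0001.rdf
```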

The element set for a specific FR model can be accessed as follows:

These element sets reflect the current published models; however, the FR family is in the process of being consolidated, which may result in the eventual deprecation of some URIs.

The namespaces can be accessed at:

The FR namespaces are maintained and accessed using the Open Metadata Registry:

PREMIS version 2.2 released

The PREMIS editorial committee is pleased to announce the release of PREMIS version 2.2. The changes in this version result from requests to expand rights information and are as follows.

Rights entity: changes to data dictionary and schema:

  • Addition of copyrightDocumentationIdentifier, licenseDocumentationIdentifier and statuteDocumentationIdentifier (each with type, value and role) to allow linking to documentation supporting rights information for copyright, license or statute. Note that licenseIdentifier served the same purpose in version 2.1; it remains in this version for backwards compatibility, but licenseDocumentationIdentifier is now preferred. The documentation identifiers all carry a role to specify the purpose of the documentation and to distinguish entries when more than one is present.

  • Addition of otherRightsInformation to allow rights statements with a basis other than copyright, license or statute, e.g. institutional policy. It has the following subunits: otherRightsDocumentationIdentifier, otherRightsBasis, otherRightsApplicableDates and otherRightsNote.

  • Addition of applicable dates to copyrightInformation, licenseInformation and statuteInformation

  • Addition of termOfRestriction to allow restrictions, e.g. embargoes, to be expressed. The previous version had only termOfGrant.

Miscellaneous schema-only changes include:

  • Country code definition added to recommend use of a standard country code for copyrightJurisdiction and statuteJurisdiction

  • Addition of the pattern “OPEN” to the list of patterns allowed for dates by the extended date/time format (EDTF)

These changes will be incorporated into the PREMIS data dictionary shortly.
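To illustrate how the new rights elements fit together, here is a simplified Python sketch that assembles a rights fragment containing some of the version 2.2 additions. The nesting and namespaces are abbreviated for readability, and the identifier values are invented; the official schema defines the exact structure.

```python
# Simplified sketch of a PREMIS 2.2-style rights fragment using the
# new elements named in the announcement. Nesting and namespaces are
# abbreviated here; consult the official schema for the real structure.
import xml.etree.ElementTree as ET

rights = ET.Element("rightsStatement")
ET.SubElement(rights, "rightsBasis").text = "license"

# New in 2.2: documentation identifier with type, value and role
doc = ET.SubElement(rights, "licenseDocumentationIdentifier")
ET.SubElement(doc, "licenseDocumentationIdentifierType").text = "URI"
ET.SubElement(doc, "licenseDocumentationIdentifierValue").text = "http://example.org/license/1"
ET.SubElement(doc, "licenseDocumentationRole").text = "license text"

# New in 2.2: termOfRestriction, e.g. for an embargo period
term = ET.SubElement(rights, "termOfRestriction")
ET.SubElement(term, "startDate").text = "2012-08-01"
ET.SubElement(term, "endDate").text = "2013-08-01"

print(ET.tostring(rights, encoding="unicode"))
```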

The new schema is at:

Changes in the data dictionary are documented at:

OCLC adds linked data to WorldCat.org

OCLC is taking the first step toward adding linked data to WorldCat by appending Schema.org descriptive mark-up to WorldCat.org pages. WorldCat.org now offers the largest set of linked bibliographic data on the web. With the addition of Schema.org mark-up to all book, journal and other bibliographic resources in WorldCat.org, the entire publicly available version of WorldCat is now available for use by intelligent web crawlers, like Google and Bing, that can make use of this metadata in search indexes and other applications.

Commercial developers that rely on web-based services have been exploring ways to exploit the potential of linked data. The Schema.org initiative – launched in 2011 by Google, Bing and Yahoo! and later joined by Yandex – provides a core vocabulary for markup that helps search engines and other web crawlers more directly make use of the underlying data that powers many online services.
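The kind of mark-up involved can be sketched as follows: a crawler encountering Schema.org microdata on a bibliographic page can lift out typed properties with a simple parse. The HTML fragment below is illustrative, not actual WorldCat.org output, and the parser is a deliberately minimal standard-library sketch.

```python
# Sketch: the kind of Schema.org microdata a crawler can read from a
# bibliographic page, extracted with only the standard library.
from html.parser import HTMLParser

PAGE = """
<div itemscope itemtype="http://schema.org/Book">
  <span itemprop="name">Moby Dick</span>
  <span itemprop="author">Herman Melville</span>
</div>
"""

class ItempropParser(HTMLParser):
    """Collect itemprop name/text pairs from Schema.org microdata."""
    def __init__(self):
        super().__init__()
        self._prop = None
        self.props = {}

    def handle_starttag(self, tag, attrs):
        # Remember the itemprop name (if any) until its text arrives
        self._prop = dict(attrs).get("itemprop")

    def handle_data(self, data):
        if self._prop and data.strip():
            self.props[self._prop] = data.strip()
            self._prop = None

parser = ItempropParser()
parser.feed(PAGE)
print(parser.props)  # → {'name': 'Moby Dick', 'author': 'Herman Melville'}
```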

OCLC is working with the Schema.org community to develop and add a set of library vocabulary extensions to WorldCat data. Schema.org and library-specific extensions will provide a valuable two-way bridge between the library community and the consumer web. Schema.org is working with a number of other industries to provide similar sets of extensions for other specific use cases.

The opportunities that linked data provide to the global library community are in line with OCLC’s core strategy of collaboratively building Webscale with libraries. Adding linked data to WorldCat records makes those records more useful – especially to search engines, developers and services on the wider web, beyond the library community. This will make it easier for search engines to connect non-library organizations to library data.

“Schema.org introduces an important new standard,” said Richard Wallis, OCLC Technology Evangelist. “Making library information compatible with the rich data sources now being published widely on the web will establish libraries as a major hub in the linked data universe. This enhancement demonstrates the WorldShare Platform vision by exposing rich bibliographic and authority data on behalf of OCLC member libraries”.

WorldCat has been built by thousands of member libraries over the last four decades and is the world’s largest online registry of library collections. OCLC will continue to engage the library community and the larger developer communities to research, discuss and inform the progression of linked data projects on behalf of member libraries.

“Libraries generate, maintain and improve an enormous amount of high-quality data that is valuable well beyond traditional library boundaries,” said Eric Miller, President of Zepheira, a professional services company that promotes the web as a platform to manage information, and is assisting OCLC with linked data strategy. “By operating as a kind of switchboard to and from other data-driven resources, WorldCat data can better connect students, scholars and businesspeople to library resources”.

OCLC sees Schema.org as a timely and significant development toward linked data technology adoption that will provide recognizable benefits for libraries. “OCLC Research has been a lead participant in putting semantic structure in the web for many years,” according to Jeff Young, OCLC Software Architect. “Schema.org gives us a search engine-friendly vocabulary to describe our complex data environment. It conveniently allows various communities to join authoritative sources on the web, such as Dewey, VIAF and FAST headings, using the same structures”.

Further demonstrating its role in providing linked library data, OCLC has recently announced that the full set of DDC 23 – more than 23,000 assignable numbers and captions in English – is now available as linked data.

OCLC is committed to the stability and improved functionality of linked bibliographic data. Such markup is likely to evolve over the coming months as the community develops a common understanding. This release should be considered experimental and subject to change. This linked data release of WorldCat.org is made available by OCLC under the Open Data Commons Attribution License.

Search WorldCat on the web site:

OCLC and EBSCO develop partnership to offer increased options for discovery

OCLC and EBSCO publishing (EBSCO) have signed an agreement to make EBSCO discovery service™ (EDS) interoperable with OCLC WorldShare management services, enabling libraries to use EDS as their discovery layer and WorldShare management services as their library management system. Additionally, the companies are investigating working toward an integrated solution allowing WorldCat local users who also subscribe to EDS to search and retrieve results from EDS within the WorldCat local service.

Libraries using the integrated EDS-WMS solution will be able to perform cataloging, acquisitions, license management and circulation in OCLC’s next generation cloud-based management system, while providing their patrons with the EDS discovery service as a user front end. OCLC and EBSCO are working together to integrate key WMS functions into EDS, such as patron identity management, item availability, circulation and system configuration. Because libraries using this integrated solution will catalog in WorldCat, users of EDS in this configuration will have the ability to access all of WorldCat.

“The EBSCO-OCLC approach provides libraries with flexibility and choice,” said Jay Jordan, OCLC President and CEO. “It enables them to configure their discovery, content and management services according to their needs. It reduces duplication of effort at the same time that it vastly increases the availability of library resources. It will be a model for future collaboration on the WorldShare Platform across the library ecosystem.”

“Libraries’ situations vary so greatly that the over-arching need is for options and customization,” said Tim Collins, President of EBSCO Publishing. “This partnership allows libraries using WorldCat Local and OCLC WorldShare Management Services to customize the approach to discovery and collection utilization for their institutions, while exposing an even broader array of content available via EDS.”

“This affiliation between EDS and WMS illustrates the value of building partnerships in service to libraries using the OCLC WorldShare Platform,” said Chip Nilges, OCLC Vice President, Business Development. This strategic partnership reflects OCLC’s commitment to provide broader access to the Platform environment and WorldCat data, outside of OCLC developed applications. “Partners like EBSCO can take advantage of the same infrastructure that OCLC uses to build and maintain its own services, providing libraries with an extended range of options that take advantage of the same core data.”

EBSCO discovery service:

OCLC WorldShare management services:

Bowker stacks: Books In Print, resources for college libraries go mobile with new app

Bowker, an affiliated business of ProQuest, has launched its new Stacks™ app, which gives subscribers of Books In Print® and resources for college libraries (RCL) the ability to scan a barcode or enter an ISBN, access book records, and add titles to lists from their iPhones® and iPads®. Using Stacks, book buyers in a store, an office, or at a trade show can simply scan and save titles from Books In Print on the spot. Librarians can scan titles from their stacks and add them to lists for weeding and other workflows, eliminating the need to cart books back to a desk for evaluation.

“Stacks is a tremendous time saver,” said Sharon Lubrano, ProQuest Vice-President and General Manager of Bowker. “We’re enabling users to connect with and use two pivotal resources no matter where they are – a tradeshow, a bookstore, a meeting with colleagues, in their library stacks, anywhere”.

Bowker Stacks streamlines library and book-buying processes and saves money by eliminating dedicated scanners. Users simply scan a book’s barcode or enter the ISBN, then link to the metadata from Books In Print to add titles to lists for later review. RCL users can work right at their shelves, scanning titles and verifying on their iPhones and iPads whether they are part of the list that peer librarians consider essential for two- and four-year collections. If a title is not in RCL, it is not core and is a candidate for weeding.
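The scanning step rests on the standard EAN-13/ISBN-13 check digit, which an app can verify before any metadata lookup. Here is a minimal Python sketch of that validation; the lookup against Books In Print or RCL itself is the service’s job and is not shown.

```python
# Sketch: validating a scanned EAN-13 barcode / ISBN-13 before the
# metadata lookup step described above, using the standard ISBN-13
# check-digit rule (weighted sum must be divisible by 10).

def isbn13_is_valid(isbn: str) -> bool:
    digits = [int(c) for c in isbn if c.isdigit()]
    if len(digits) != 13:
        return False
    # Digits in odd positions count once, even positions count three times
    total = sum(d * (3 if i % 2 else 1) for i, d in enumerate(digits))
    return total % 10 == 0

print(isbn13_is_valid("978-0-306-40615-7"))  # → True
print(isbn13_is_valid("978-0-306-40615-0"))  # → False
```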

Bowker Stacks can be downloaded from iTunes®. A “How To” is included in the app for simple linking to Books in Print or RCL accounts.

Stacks is one of a variety of new mobile technologies available from ProQuest and its businesses. Earlier in June 2012, ebrary announced a new Android™ app that will be freely available on Google Play this summer. The company also has apps for the iPad®, iPhone®, and iPod touch®. And a new mobile-optimized interface is being beta tested for the ProQuest® research environment.

Bowker Stacks:

Research universities and the future of America: report from the national academies

The committee on research universities of the national academies in Washington, DC has released its report, Research universities and the future of America: ten breakthrough actions vital to our nation’s prosperity and security.

This report examines the health and competitiveness of the nation’s research universities and their strong partnership with government and industry that is critical to the nation’s prosperity and national goals. The report responds to a request from Congress for:

[…] the top ten actions that Congress, the federal government, state governments, research universities, and others could take to assure the ability of the American research university to maintain the excellence in research and doctoral education needed to help the United States compete, prosper, and achieve national goals for health, energy, the environment, and security in the global community of the 21st century.

Research universities and the future of America presents critically important strategies for ensuring that our nation’s research universities contribute strongly to America’s prosperity, security, and national goals. Widely considered the best in the world, the nation’s research universities today confront significant financial pressures, important advances in technology, a changing demographic landscape, and increased international competition. This report provides a course of action for ensuring our universities continue to produce the knowledge, ideas, and talent the USA needs to be a global leader in the twenty-first century.

Research universities and the future of America focuses on strengthening and expanding the partnership among universities, government, business, and philanthropy that has been central to American prosperity and security. The report identifies the top ten actions that Congress, the federal government, state governments, research universities, and others could take to strengthen the research and education missions of our research universities, their relationships with other parts of the national research enterprise, and their ability to transfer new knowledge and ideas to those who productively use them in our society and economy.

This report examines trends in university finance, prospects for improving university operations, opportunities for deploying technology, and improvement in the regulation of higher education institutions. It also explores ways to improve pathways to graduate education, take advantage of opportunities to increase student diversity, and realign doctoral education for the careers new doctorates will follow. Research Universities and the future of America is an important resource for policy makers on the federal and state levels, university administrators, philanthropic organizations, faculty, technology transfer specialists, libraries, and researchers.

Information on the report can be found at:

Read the report online at:

New book capture cradle for rapid capture of bound & loose document collections

The digital transitions division of cultural heritage debuted its new DT RGC180 capture cradle, the latest integration of book capture and reprographic technology, at the June 2012 ALA Annual Conference in Anaheim, CA. Designed to produce preservation-grade images at the fastest rate of capture, the DT RGC180 is the optimum digitization solution for the rapid capture of rare, bound and loose materials.

The DT RGC180 features a built-in pneumatic 180° dual platen book cradle that automatically adjusts to the thickness of bound collections. The system is designed to bring printed materials to optimal focus and accommodates books up to 25×35 in. with up to 4 in. bindings. The book cradle platens are self-adjusting platforms that utilize dual pneumatic pistons for raising and lowering. The platforms gently push the books against the glass plate for image capture and can also leave documents partially open when the binding is too fragile and cannot be completely flattened. The RGC180 is operated by foot pedals and can be fine-tuned to protect the widest range of materials.

“The DT RGC180 Capture Cradle was originally designed and built for the National Archives Records Administration with the intention of providing preservation-class images in a highly efficient workflow,” said division of cultural heritage Director, Peter Siegel. “This industry requires solutions that not only output the finest image quality, but are also durable, easy to use, and won’t become obsolete in a few years. We answered these demands by incorporating a modular design so that its components can be upgraded as technology advances. To further increase versatility, we include a 30×40 in. copyboard and continue to design accessory solutions such as the DT Film Scanning Kit so a wider variety of materials can be digitized utilizing the same system.”

DT RGC180 key features:

  • 180° anti-reflective glass platen enables the digitization of books up to 25×35 in. with up to 4 in. bindings.

  • Two pneumatic platforms that automatically adjust to your book for optimal focus.

  • System can be fine-tuned for the safety of the widest range of materials.

  • The optimum solution for the digitization of books, foldouts, works on paper, serials including newspapers, loose manuscripts, photographs, drawings and more.

  • Delivers preservation grade images in the following formats and color spaces: TIFFs, JPEGs, PDFs in RGB, grayscale and CMYK modes. Open source raw and DNG also supported.

  • Compliant with the most stringent federal agencies digitization guidelines initiative (FADGI) requirements.

The DT film scanning kit is an add-on fixture for use with the new DT RGC180 capture cradle or DT RG3040 reprographic system. This system answers the industry’s demand for a faster and higher quality scanning solution that will digitize photographic films, plates and translucent materials. With an image capture every two seconds, the DT film scanning kit is over 200 times faster than flatbed or drum scanners. This system incorporates a cooled transilluminator to digitize all types of photographic plates as well as negative and positive film from 35 mm up to 11×17 in. and includes all the necessary film pattern holders.

For more technical information concerning the DT RGC180 capture cradle or DT film scanning kit, visit the web site at:

DT RGC180 capture cradle: