New & Noteworthy

Library Hi Tech News

ISSN: 0741-9058

Article publication date: 12 October 2012


Citation

(2012), "New & Noteworthy", Library Hi Tech News, Vol. 29 No. 8. https://doi.org/10.1108/lhtn.2012.23929haa.002

Publisher

Emerald Group Publishing Limited

Copyright © 2012, Emerald Group Publishing Limited


New & Noteworthy

Article Type: New & Noteworthy From: Library Hi Tech News, Volume 29, Issue 8

Demystifying born digital: OCLC Research Library partnership

OCLC Research’s “Demystifying born digital” project focuses on enhancing the effective management of born-digital materials as they intersect with special collections and archives practices in research libraries.

Building on Ricky Erway’s essay Defining “Born Digital”, the outcomes of this project will be a series of brief reports on the following issues:

  • Technical baby steps for those who have acquired born-digital materials on physical media but have not yet begun dealing with them due to lack of expertise, time, fear, money, etc. This is intended to help curators start getting materials under control without doing any harm to the originals.

  • A call for a network of hubs to enable cost-effective outsourcing of the transfer of various types of physical media, particularly obsolete formats. We seek to reduce the need for everyone to figure everything out on their own, and instead set up a network of expert sites that have the necessary equipment and experience.

  • The skills and experience that archivists bring to the management of any type of born-digital content that might find its way into a research library, whether or not it would be considered “archives” or “special collections.” This will make sure that archival approaches are neither ignored nor reinvented but rather are applied and adapted by non-archivists when managing born-digital content.

  • Working with donors of born-digital materials. This will help archivists et al. do as much as possible up front to ensure that rights, restrictions, and technical issues are addressed prior to acquisition.

  • Closely related to the skills and experience piece, thoughts on the relationship between “born-digital” and “special collections.” The impetus for this is the general confusion about which born-digital content “belongs in” special collections.

  • A relatively short annotated list of the best resources, guidelines, and software for managing born-digital collections.

As revealed by the Survey on Special Collections and Archives project (published as Taking Our Pulse: The OCLC Research Survey of Special Collections and Archives), management of born-digital materials in academic and research libraries remains in its infancy. Among the indicators: one-third of survey respondents cite it as one of their three greatest special collections challenges, less than half have assigned responsibility for this activity to an organizational unit, only a third of those who have collected any material know how much they have, and born-digital management is the number one area in which education and training are needed.

Half of the gigabytes of material reported are held by only two responding institutions; more than 90 percent is held by 13. At the same time, 80 percent have collected at least one born-digital format in special collections. It is plausible to surmise that much collecting is ad hoc and reactive. The most commonly stated impediment to born-digital management was lack of funding, despite the fact that we know very little about the relevant costs. Lack of expertise was the second most common impediment.

Polar-opposite opinions can be heard across the research library spectrum regarding the intersections between “special collections” and “born digital”: some believe that born digital is entirely the responsibility of special collections, others that there is no role whatsoever for special collections. A nuanced approach is necessary.

The goal of this activity is to characterize the skills and expertise that archivists and special collections librarians bring to the table in the born-digital context, establish the relevance of those skills with regard to particular types of born-digital material, and provide a very basic roadmap for beginning to implement management of born-digital archival materials. Research libraries will thereby begin to gain the confidence necessary for taking initial steps to launch a born-digital management program that can be scaled up over time.

The essay Defining “Born Digital” addresses aspects of this by presenting a brief taxonomy of material types ranging from data sets and web sites to digital manuscripts and photographs, among others. Building on this work, the research partnership will explore the array of skills and expertise held by special collections librarians and archivists that are crucial to effective management of born-digital materials and how those skills pertain to the various types of digital material.

Further, the scope of publications and research relating to born-digital archival materials (generally referred to in the archival literature as “electronic records”) is vast, but most work focuses on very specific problems and solutions, many of them dauntingly complex. Sophisticated understanding of requirements and implementations exists in far more governmental and corporate archives than in academic institutions; the latter must quickly play catch-up. No simple roadmap exists to help special collections and archives in research libraries take the first steps to implement born-digital management: we will provide one.

OCLC Research staff will work with an informal group of advisors, each of whom has particular experience and perspective on these issues. Initial conversations have confirmed that these colleagues have much to offer in helping define and elucidate the most relevant points.

The project has issued two new reports:

“You’ve Got to Walk Before You Can Run: First Steps for Managing Born-Digital Content Received on Physical Media” is geared to those tasked with gaining preliminary control over the digital media in an archives’ collections, including those who do not know where to begin in managing born-digital materials.

Written by Senior Program Officer Ricky Erway, “You’ve Got to Walk Before You Can Run” errs on the side of simplicity and describes what is truly necessary to start managing born-digital content on physical media. It presents a list of the basic steps without expanding on archival theory or the use of particular software tools. It does not assume that policies are in place or that those performing the tasks are familiar with traditional archival practices, nor does it assume that significant IT support is available. In total, 18 well-respected advisors weighed in on the guidance, ensuring that it was not just simple, but authoritative.
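
The report deliberately stops short of naming particular software tools, so the following is only an illustrative sketch of one commonly recommended first step, recording fixity checksums for disk images so that later copies can be verified without re-handling the original media; the file name and the choice of Python are assumptions for illustration, not guidance taken from the report.

    # Illustrative sketch only (not from the OCLC report): record fixity checksums
    # for a disk image so future copies of born-digital content can be verified.
    # "floppy_disk_001.img" is a hypothetical file name.
    import hashlib

    def fixity(path, algorithms=("md5", "sha256"), chunk_size=1024 * 1024):
        """Return a dictionary of hex digests for the file at path."""
        hashers = {name: hashlib.new(name) for name in algorithms}
        with open(path, "rb") as fh:
            for chunk in iter(lambda: fh.read(chunk_size), b""):
                for h in hashers.values():
                    h.update(chunk)
        return {name: h.hexdigest() for name, h in hashers.items()}

    print(fixity("floppy_disk_001.img"))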

Read the report, “You’ve Got to Walk Before You Can Run: First Steps for Managing Born-Digital Content Received on Physical Media”: http://oclc.org/research/publications/library/2012/2012-06r.html

Watch the author’s video, “You’ve Got to Walk Before You Can Run: First Steps for Managing Born-Digital Content Received on Physical Media”: http://youtu.be/Mu_TC35u8cw

The second report, “Swatting the Long Tail of Digital Media: A Call for Collaboration”, urges a collaborative approach for conversion of content on various types of digital media. Written by Senior Program Officer Ricky Erway, “Swatting the Long Tail of Digital Media” is intended for managers who are making decisions on where to invest their born-digital time and money. It should help them understand that any expectations that local staff will be able to handle everything are probably impractical. We hope it will also help archivists (and others) in the trenches breathe a sigh of relief to think that perhaps they will not have to deal with an array of obsolete media all on their own.

As with the first report, “You’ve Got to Walk Before You Can Run: First Steps for Managing Born-Digital Content Received on Physical Media”, this report refers only to born-digital material on physical media.

Read the report, “Swatting the Long Tail of Digital Media: A Call for Collaboration”: http://oclc.org/research/publications/library/2012/2012-06r.html

Watch the author’s video about the report, “Swatting the Long Tail of Digital Media: A Call for Collaboration (2:02)”: http://youtu.be/PklOIz5FyBE

Learn more about the OCLC Research activity Demystifying Born Digital: www.oclc.org/research/activities/borndigital.html

Kuali OLE and JISC collaborate to create global open knowledgebase

Kuali OLE, one of the largest academic library software collaborations in the USA, and JISC, the UK’s expert on digital technologies for education and research, have announced a collaboration that will make data about e-resources – such as publication and licensing information – more easily available.

Together, Kuali OLE and JISC will develop an international open data repository that will give academic libraries a broader view of subscribed resources.

The effort, known as the GOKb project, is funded in part by a $499,000 grant from The Andrew W. Mellon Foundation. North Carolina State University will serve as lead institution for the project.

GOKb will be an open, community-based, international data repository that will provide libraries with publication information about electronic resources. This information will support libraries in providing efficient and effective services to their users and ensure that critical electronic collections are available to their students and researchers.

“This Kuali OLE – JISC partnership adds momentum to our efforts to create an open library system and offers benefits to all participants. We are pleased at the way our projects have come together toward a common goal, and look forward to sharing the results widely,” said Deborah Jakubs, University Librarian and Vice Provost for Library Affairs at Duke University and Co-chair of the Kuali OLE board.

Robert H. McDonald, Executive Director of Kuali OLE, says, “With the start-up of the GOKb Project, Kuali OLE as an organization is showcasing the strengths and opportunities that come from deep collaborative engagements with our peer academic libraries both in the US and in the UK. The role for libraries in collaboration around electronic content can’t be dismissed. Libraries need better supply-chain options for our electronic content management workflows and the GOKb Project will provide solutions.”

“Nowhere are the advantages and possibilities of data better understood and more keenly felt than in academic libraries,” says Rachel Bruce, JISC Innovation Director. “Data underpins the services and systems that libraries provide to their students and researchers.”

The GOKb cloud service will provide data for “subscribed resources” from a higher education perspective. It will include data such as publication information, related organizations, and model licences, and will be accessible across all US and UK academic libraries.

Many of the concerns libraries have in the management of electronic resources are the same across the world. Indeed, a number of projects, such as Kuali OLE (Open Library Environment) in the USA and the Knowledge Base+ service in the UK, are exploring community-based solutions.

GOKb home: http://gokb.org/

Linked data for libraries, archives, and museums: new issue of Information Standards Quarterly

The National Information Standards Organization (NISO) has announced the publication of a special themed issue of the Information Standards Quarterly (ISQ) magazine on “linked data for libraries, archives, and museums”. ISQ Guest Content Editor, Corey Harper, Metadata Services Librarian, New York University, has pulled together a broad range of perspectives on what is happening today with linked data in cultural institutions. He states in his introductory letter, “As the Linked Data Web continues to expand, significant challenges remain around integrating such diverse data sources. As the variance of the data becomes increasingly clear, there is an emerging need for an infrastructure to manage the diverse vocabularies used throughout the Web-wide network of distributed metadata. Development and change in this area has been rapidly increasing; this is particularly exciting, as it gives a broad overview on the scope and breadth of developments happening in the world of Linked Open Data for Libraries, Archives, and Museums.”

The feature article by Gordon Dunsire, Corey Harper, Diane Hillmann, and Jon Phipps on “Linked Data Vocabulary Management” describes the shift in popular approaches to large-scale metadata management and interoperability to the increasing use of the Resource Description Framework to link bibliographic data into the larger web community. The authors also identify areas where best practices and standards are needed to ensure a common and effective linked data vocabulary infrastructure.

Four “in practice” articles illustrate the growth in the implementation of linked data in the cultural sector. Jane Stevenson in “Linking Lives” describes the work to enable structured and linked data from the Archives Hub in the UK. In “Joining the Linked Data Cloud in a Cost-Effective Manner”, Seth van Hooland, Ruben Verborgh, and Rik Van de Walle show how general purpose interactive data transformation tools, such as Google Refine, can be used to efficiently perform the necessary task of data cleaning and reconciliation that precedes the opening up of linked data. Ted Fons, Jeff Penka, and Richard Wallis discuss “OCLC’s Linked Data Initiative” and the use of Schema.org in WorldCat to make library data relevant on the web. In “Europeana: Moving to Linked Open Data”, Antoine Isaac, Robina Clayphan, and Bernhard Haslhofer explain how the metadata for over 23 million objects are being converted to an RDF-based linked data model in the European Union’s flagship digital cultural heritage initiative.
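
As a minimal sketch of the kind of description these projects deal in (not code drawn from any of them), the snippet below uses the Python rdflib library and Schema.org terms to express one bibliographic resource as linked data; the URIs are hypothetical placeholders rather than real catalogue or authority identifiers.

    # Minimal linked-data sketch using rdflib and Schema.org terms.
    # All URIs below are hypothetical placeholders, not real identifiers.
    from rdflib import Graph, Literal, Namespace, URIRef

    SCHEMA = Namespace("http://schema.org/")
    g = Graph()
    g.bind("schema", SCHEMA)

    book = URIRef("http://example.org/resource/123")
    g.add((book, SCHEMA.name, Literal("An Example Title")))
    g.add((book, SCHEMA.datePublished, Literal("1965")))
    # Linking to an (invented) authority URI is what makes the record "linked" data.
    g.add((book, SCHEMA.author, URIRef("http://example.org/authority/person/456")))

    # serialize() returns a str in recent rdflib releases (bytes in older ones).
    print(g.serialize(format="turtle"))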

Jon Voss provides a status report on “Linked Open Data for Libraries, Archives, and Museums (LODLAM) State of Affairs” and the annual summit to advance this work. Thomas Elliott, Sebastian Heath, and John Muccigrosso contribute a “Report on the Linked Ancient World Data Institute”, a workshop to further the availability of linked open data for creating reusable digital resources within the classical studies disciplines.

Kevin Ford wraps up the contributed articles with a standard spotlight article on “LC’s Bibliographic Framework Initiative and the Attractiveness of Linked Data”. This Library of Congress-led community effort aims to transition from MARC 21 to a linked data model.

“The move to a linked data model in libraries and other cultural institutions represents one of the most profound changes that our community is confronting,” stated Todd Carpenter, NISO Executive Director. “While it completely alters the way we have always described and cataloged bibliographic information, it offers tremendous opportunities for making this data accessible and usable in the larger, global web community. This special issue of ISQ demonstrates the great strides that libraries, archives, and museums have already made in this arena and illustrates the future world that awaits us.”

“Institutions that are just starting to dip their toes in the waters of linked data will find much in this issue of ISQ to inspire and challenge them,” said Cynthia Hodgson, ISQ Managing Editor. “Those further along the implementation path can learn how others have addressed the common issues encountered in making the transition to a linked data model.”

ISQ is available in open access in electronic format on the NISO web site. Both the entire issue and individual articles may be freely downloaded. Print copies are available by subscription and as print on demand.

For more information and to access the free electronic version, visit: www.niso.org/publications/isq/2012/

Europeana’s huge cultural dataset now open for re-use

Opportunities for apps developers, designers and other digital innovators will be boosted today as the digital portal Europeana opens up its dataset of over 20 million cultural objects for free re-use.

The massive dataset is the descriptive information about Europe’s digitised treasures. For the first time, the metadata is released under the Creative Commons CC0 Public Domain Dedication, meaning that anyone can use the data for any purpose – creative, educational, commercial – with no restrictions. This release, which is by far the largest one-time dedication of cultural data to the public domain using CC0, offers a new boost to the digital economy, providing electronic entrepreneurs with opportunities to create innovative apps and games for tablets and smartphones and to create new web services and portals.

Europeana’s move to CC0 is a step change in open data access. Releasing data from across the memory organisations of every EU country sets an important new international precedent, a decisive move away from the world of closed and controlled data.

Importantly, the change represents a valuable contribution to the European Commission’s agenda to drive growth through digital innovation. Online open data is a core resource which can fuel enterprise and create opportunities for millions of Europeans working in Europe’s cultural and creative industries. The sector represents 3.3 percent of EU GDP and is worth over €150 billion in exports.

Welcoming the announcement, Neelie Kroes, Vice-President of the European Commission with responsibility for the Digital Agenda for Europe, said: “Open data is such a powerful idea, and Europeana is such a cultural asset, that only good things can result from the marriage of the two. People often speak about closing the digital divide and opening up culture to new audiences but very few can claim such a big contribution to those efforts as Europeana’s shift to creative commons.”

Applying the CC0 waiver also means that Europeana’s metadata can now be used in Linked Open Data developments. This holds the potential to bring together data from Europe’s great libraries, museums and archives with data from other sectors such as tourism and broadcasting. The result could be a powerful knowledge generating engine for the twenty-first century.

Jill Cousins, Executive Director of Europeana said: “This move is a significant step forward for open data and an important cultural shift for the network of museums, libraries and galleries who have created Europeana. This is the world’s premier cultural dataset, and the decision to open it up for re-use is bold and forward looking – it recognises the important potential for innovation that access to digital data provides. This development means that Europe now sets the worldwide standard for the sector.”

The data is available via an API (application programming interface), for which anyone can register.
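
As a hedged illustration only, a registered key could be used to query the search API along the lines sketched below; the endpoint path, parameter names, and response fields are assumptions based on Europeana’s published API documentation and should be checked against the current documentation before use.

    # Illustrative sketch of a Europeana search API call.
    # Endpoint, parameters, and response fields are assumptions; replace
    # YOUR_API_KEY with a key obtained via the registration page.
    import json
    import urllib.parse
    import urllib.request

    params = urllib.parse.urlencode({
        "wskey": "YOUR_API_KEY",
        "query": "Mona Lisa",
        "rows": 5,
    })
    url = "https://api.europeana.eu/record/v2/search.json?" + params
    with urllib.request.urlopen(url) as response:
        results = json.load(response)

    for item in results.get("items", []):
        print(item.get("title"), item.get("guid"))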

Register for an API key for Europeana: http://pro.europeana.eu/web/guest/registration

Europeana: www.europeana.eu/

Online access to audiovisual heritage: EUscreen publishes second status report

The EUscreen project has published 30,000 television items online in an effort to make historical audiovisual content widely accessible. EUscreen started in October 2009 as a three-year project funded by the European Commission’s eContentplus programme. A beta version of the portal was launched in 2011 and is also directly connected to Europeana. EUscreen is co-ordinated by the University of Utrecht and its consortium consists of 28 partners and ten associate partners (comprising audiovisual archives, research institutions, technology providers and Europeana) from 20 different European countries.

EUscreen has announced its second status report, “Online Access to Audiovisual Heritage”. This document is a follow-up to the first EUscreen status report, published one year ago. In three chapters, the report gives an overview of technological developments that influence the publication of, and access to, historical footage. The report discusses online heritage practices within Europe and beyond. In a field that faces constant renewal, overhaul and additional challenges, the report aims to take stock of the status of the online audiovisual heritage field. This allows the EUscreen project to measure its own strategies and technological development and allows the participating archives, broadcasters and the broader GLAM community to come up with solutions for providing access that cater to users’ needs and environments.

Each of the three chapters of the report focuses on a different aspect of online access. Through this structure, authors Erwin Verbruggen and Johan Oomen successively discuss three main trends regarding access, namely:

  1. use and re-use today;

  2. trends towards a cultural commons; and

  3. fundamental research in the area of audiovisual content.

The first chapter gives an overview of major developments, including access provision and use of content by the creative industries. The second chapter explores the topic of (sustainable) re-use of audiovisual sources as a cultural and explorative practice leading towards more open and participatory archives. Finally, the third chapter discusses European research topics that are currently ongoing in areas connected to audiovisual heritage.

Download the Second EUscreen Status Report (PDF): http://bit.ly/OVLcZV

Download the First EUscreen status report (PDF): http://bit.ly/MF1hsL

Information about the final EUscreen conference: https://euscreen2012.eventbrite.com/

For the EUscreen portal, visit: www.euscreen.eu/

California Digital Library announces release of XTF version 3.1

The California Digital Library (CDL) has announced the release of version 3.1 of the eXtensible Text Framework (XTF), an open source, highly flexible software application that supports the search, browse and display of heterogeneous digital content. XTF provides efficient and practical methods for creating customized end-user interfaces for distinct digital content collections and is used by institutions worldwide.

Major features in the 3.1 release include:

  1. Improved schema handling for EAD finding aids. In addition to the EAD 2002 DTD, XTF now provides support for search and display of:

    • EAD 2002 schema and EAD 2002 RelaxNG finding aids; and

    • output from Archivists’ Toolkit and Archon.

  2. Better OAI 2.0 conformance.

  3. Dynamic site maps to support optimal search engine indexing.

See the 3.1 change log for further details.

XTF is a combination of Java and XSLT 2.0 that indexes, queries, and displays digital objects and is based on open source software (e.g. Lucene and Saxon). XTF can be downloaded from the XTF web site or from the XTF project page on SourceForge, where the source code can also be found.

The XTF web site also provides a self-guided tutorial and a sample of the default installation, demonstrating the capabilities of the tool out-of-the-box. Both of these resources provide a quick view of the capabilities of XTF prior to download.

Offering a suite of customizable features that support diverse intellectual access to content, XTF interfaces can be designed to support the distinct tools and presentations that are useful and meaningful to specific audiences. In addition, XTF offers the following core benefits to developers:

  1. Easy to deploy: drops directly into a Java application server such as Tomcat or Resin; has been tested on Solaris, Mac, Linux, and Windows operating systems.

  2. Easy to configure: can create indexes on any XML element or attribute; entire presentation layer is customizable via XSLT.

  3. Robust: optimized to perform well on large documents (e.g. a single text that exceeds 10MB of encoded text); scales to perform well on collections of millions of documents; provides full Unicode support.

  4. Extensible:

    • works well with a variety of authentication systems (e.g. IP address lists, LDAP, Shibboleth);

    • provides an interface for external data lookups to support thesaurus-based term expansion, recommender systems, etc.;

    • can power other digital library services (e.g. XTF contains an OAI-PMH data provider that allows others to harvest metadata, as in the harvesting sketch below, and an SRU interface that exposes searches to federated search engines); and

    • can be deployed as separate, modular pieces of a third-party system (e.g. the module that displays snippets of matching text).

  5. Powerful for the end-user:

    • spell checking of queries;

    • faceted displays for browsing;

    • dynamically updated browse lists; and

    • session-based bookbags.

These basic features can be tuned and modified. For instance, the same bookbag feature that allows users to store links to entire books can also store links to citable elements of an object, such as a note or other reference.
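
For example, the OAI-PMH data provider noted in the extensibility list can be harvested with any standard OAI-PMH client. The sketch below issues a ListRecords request in Python; the base URL is a hypothetical placeholder for a local XTF installation, while the verb and parameters are standard OAI-PMH 2.0.

    # Illustrative OAI-PMH harvest from an XTF data provider.
    # The base URL is a hypothetical placeholder for a local installation;
    # ListRecords and metadataPrefix=oai_dc are standard OAI-PMH 2.0.
    import urllib.request
    import xml.etree.ElementTree as ET

    BASE_URL = "http://localhost:8080/xtf/oai"  # hypothetical
    OAI = "{http://www.openarchives.org/OAI/2.0/}"
    DC = "{http://purl.org/dc/elements/1.1/}"

    url = BASE_URL + "?verb=ListRecords&metadataPrefix=oai_dc"
    with urllib.request.urlopen(url) as response:
        tree = ET.parse(response)

    for record in tree.iter(OAI + "record"):
        identifier = record.find(OAI + "header/" + OAI + "identifier")
        title = record.find(".//" + DC + "title")
        print(identifier.text if identifier is not None else "?",
              "|", title.text if title is not None else "(no title)")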

Examples of XTF-based applications include:

  • eScholarship (www.escholarship.org), the University of California’s open access scholarly publishing and research platform.

  • Mark Twain Project Online (www.marktwainproject.org), developed by the Mark Twain Papers Project, the CDL and the University of California Press.

  • Calisphere (http://calisphere.universityofcalifornia.edu/), a curated collection of primary sources keyed to the curriculum standards of California’s K-12 community, developed by the CDL.

The XTF homepage: http://xtf.cdlib.org

XTF 3.1 change log: http://xtf.cdlib.org/documentation/changelog/#3.1

XTF Project page on SourceForge: http://sourceforge.net/projects/xtf/

SiteStory transactional archive solution open source release

Herbert Van de Sompel, Los Alamos National Laboratory, Research Library, has announced the open source release of the SiteStory transactional web archiving solution. Transactional archiving consists of selectively capturing and storing transactions that take place between a web client (browser) and a web server. The SiteStory solution is compatible with the Memento “Time Travel for the Web” framework and its current implementation can be used to archive Apache web servers.

Most existing web archives recurrently send out bots to crawl the content of web servers. This results in observations of a server’s content at the time of crawling. Since the crawling frequency is generally not aligned with the change rate of a server’s resources, this approach is typically not able to capture all versions of a server’s resource. The resulting archive may provide an acceptable overview of a server’s evolution over time, but it will not provide an accurate representation of the server’s entire history. A SiteStory Web Archive, however, captures every version of a resource as it is being requested by a browser. The resulting archive is effectively representative of a server’s entire history, although versions of resources that are never requested by a browser will also never be archived.
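
Because the archive is exposed through the Memento framework, an archived version of a page can be requested by datetime negotiation. The sketch below is a generic Memento-style request against a hypothetical TimeGate URL; it illustrates the protocol rather than any SiteStory-specific configuration.

    # Generic Memento datetime negotiation against a hypothetical TimeGate URL.
    import urllib.request

    timegate = "http://example.org/timegate/http://example.org/page.html"  # hypothetical
    request = urllib.request.Request(timegate)
    request.add_header("Accept-Datetime", "Thu, 12 Apr 2012 00:00:00 GMT")

    with urllib.request.urlopen(request) as response:
        # A Memento-aware server negotiates to the archived version closest to
        # the requested datetime and reports it in the Memento-Datetime header.
        print("Resolved URI:", response.geturl())
        print("Memento-Datetime:", response.headers.get("Memento-Datetime"))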

The SiteStory Web Archive provides the following opportunities:

  • Dynamic archiving of your Apache web content server.

  • Archive accessible via the Memento protocol.

  • Archival data can be offloaded to WARC files.

  • Archival data can be uploaded into an instance of the Internet Archive’s Wayback software.

The development of SiteStory was a significant endeavor and acknowledgment of the following contributors is appropriate:

  • Lyudmila Balakireva of the Prototyping Team of the LANL Research Library for the actual SiteStory development.

  • Robert Sanderson, Harihar Shankar, Martin Klein, and Herbert Van de Sompel of the Prototyping Team of the LANL Research Library for architectural guidance.

  • Michael L. Nelson and Justin Brunelle of the Web Science and Digital Library Research Group at Old Dominion University for input and early testing.

  • Patrick Hochstenbach of the Ghent University Library and Dirk Roorda of DANS for early testing.

  • The Library of Congress for supporting the development.

Information about SiteStory is available at: http://mementoweb.github.com/SiteStory/

The SiteStory code is accessible via: https://github.com/mementoweb/SiteStory

A Google group dedicated to discussions pertaining to SiteStory is at https://groups.google.com/forum/#!forum/sitestory. Please give the software a try and share feedback on the list.

Information about transactional web archiving is available via http://en.wikipedia.org/wiki/Web_archiving#Transactional_archiving and http://ausweb.scu.edu.au/aw03/papers/fitch/

NISO publishes Journal Article Tag Suite (JATS) Standard

NISO has announced the publication of a new American National Standard, “JATS” (Journal Article Tag Suite), ANSI/NISO Z39.96-2012. JATS provides a common XML format in which publishers and archives can exchange journal content by preserving the intellectual content of journals independent of the form in which that content was originally delivered. In addition to the element and attribute descriptions, three journal article tag sets (the Archiving and Interchange Tag Set, the Journal Publishing Tag Set, and the Article Authoring Tag Set) are part of the standard. While designed to describe the textual and graphical content of journal articles, JATS can also be used for some other materials, such as letters, editorials, and book and product reviews.
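
To give a flavour of the markup, the sketch below parses a small hand-made fragment using element names defined in the Tag Suite (article, front, journal-meta, article-meta, article-title); the fragment itself is a hypothetical example, not text taken from the standard.

    # Parse a minimal, hand-made JATS fragment with the standard library.
    # The element names come from the JATS Tag Suite; the content is invented.
    import xml.etree.ElementTree as ET

    jats = """
    <article>
      <front>
        <journal-meta>
          <journal-title-group>
            <journal-title>Example Journal of Library Technology</journal-title>
          </journal-title-group>
        </journal-meta>
        <article-meta>
          <title-group>
            <article-title>A Hypothetical Article</article-title>
          </title-group>
        </article-meta>
      </front>
    </article>
    """

    root = ET.fromstring(jats)
    print("Journal:", root.findtext(".//journal-title"))
    print("Article:", root.findtext(".//article-title"))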

“Although this is the first version of JATS as an American National Standard,” stated Nettie Lagace, NISO Associate Director for Programs, “the specification has a long history as the ‘National Library of Medicine (NLM) Journal Archiving and Interchange Tag Suite’, commonly referred to as the NLM DTDs. Those DTDs were based on an article model that was used in the National Center for Biotechnology Information (NCBI)/NLM PubMed Central project to archive life science journals. The original PubMed Central article model was expanded in scope with support from Harvard University Libraries and The Andrew W. Mellon Foundation, in collaboration with Inera, Inc. and Mulberry Technologies, Inc., resulting in 2003 in the full ‘NLM Journal Archiving and Interchange Tag Suite’. The Tag Suite had reached version 3.0 prior to initiation of the NISO standardization process.”

“Since its initial release, the Archiving and Interchange Tag Suite has been widely popular,” said B. Tommie Usdin, President of Mulberry Technologies, Inc. and Co-chair of the NISO JATS Working Group. “The format is being used to tag thousands of journals worldwide and is used for the journal archives at PubMed Central and Portico and by the online publisher HighWire Press. The Library of Congress and the British Library have announced their intention to use these models for archiving electronic content.”

“Taking JATS through the NISO standardization process will bring awareness of the Tag Suite to a larger and more varied audience,” explained Jeffrey Beck, NCBI Technical Information Specialist at the NLM and Co-chair of the NISO JATS Working Group. “We expect this wider audience will find uses for the Tag Suite in new applications, beyond its traditional uses in journal publishing and archiving.”

“We are pleased that the NLM project team brought this valuable standard to NISO for wider dissemination,” stated Todd Carpenter, NISO Executive Director. “We will be supporting a standing committee to continuously update the standard and NLM will continue to host the user documentation and schemas that support the standard.”

The JATS standard is available as both an online XML document and a freely downloadable PDF from the NISO web site: www.niso.org/workrooms/journalmarkup

Supporting documentation and schemas in DTD, RELAX NG, and W3C Schema formats are available at: http://jats.nlm.nih.gov/

Music discovery requirements now available from Music Library Association

The music discovery requirements document is now available on the Music Library Association’s web site. The document was created under the auspices of Music Library Association’s Emerging Technologies and Services Committee and officially approved by the Music Library Association’s Board of Directors.

Music materials, particularly scores and recordings, pose unique demands that must be considered for successful discovery. Some of the unique needs posed by music materials can be addressed simply by ensuring that needed fields are appropriately displayed and indexed in discovery interfaces. Other problems are more difficult to solve. This document discusses the issues and when possible gives concrete recommendations for discovery interfaces. Given that most libraries will be dealing with large bodies of legacy data recorded according to AACR2 and encoded in MARC, particular attention is paid to MARC data and to AACR2, as well as issues related to RDA. These recommendations will be useful to those creating or guiding the development of discovery interfaces that will include music materials. Furthermore, because the document identifies areas where deficient data creates particular problems for discovery, those inputting or creating standards for data can use this document to identify areas where there is particular need for fuller, more consistent data.
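
As one purely illustrative sketch of indexing music-specific MARC data (the field choices and file name are illustrative assumptions, not recommendations quoted from the document), the snippet below uses the pymarc library to pull publisher/plate numbers (field 028) and medium of performance (field 382) out of a file of bibliographic records.

    # Illustrative sketch: surface two music-relevant MARC fields with pymarc.
    # Field 028 (publisher/plate number) and 382 (medium of performance) are
    # examples only; "scores.mrc" is a hypothetical file of MARC records.
    from pymarc import MARCReader

    with open("scores.mrc", "rb") as fh:
        for record in MARCReader(fh):
            titles = record.get_fields("245")
            title_subfields = titles[0].get_subfields("a") if titles else []
            title = title_subfields[0] if title_subfields else "(no title)"
            for field in record.get_fields("028", "382"):
                print(title, "|", field.tag, "|", " ".join(field.get_subfields("a")))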

Three appendixes compile technical details of the specific indexing recommendations in spreadsheets. The appendixes should be used in conjunction with the full document, particularly because in some cases multiple options are given for addressing discovery needs, and the extended discussion is contained in the document proper. The spreadsheets are not exhaustive mapping documents; their scope is the same as the document: areas which are music-specific or particularly important for music.

The music discovery requirements document group welcomes comments, questions, and other feedback on the document. The group is particularly interested in hearing how the document is being used, and in working with vendors and developers to create discovery interfaces optimized for the unique needs of music materials.

Download the music discovery requirements document at: http://committees.musiclibraryassoc.org/ETSC/MDR

New version of NISO circulation interchange protocol (NCIP) published

NISO has announced the publication of the two-part American National Standard on NCIP, the NISO Circulation Interchange Protocol, ANSI/NISO Z39.83. NCIP addresses the need for interoperability among disparate circulation, interlibrary loan, consortial borrowing, and self-service applications by standardizing the exchange of messages between and among computer-based applications. Part 1 of the standard defines the “Protocol” and Part 2: “Implementation Profile” provides a practical implementation structure. The NCIP protocol is widely supported in integrated library systems (ILS) and resource sharing software.

“This latest edition of NCIP, version 2.02, incorporates implementers’ feedback and experience into the standard with changes that improve the usefulness and practicality of the various services,” explained Mike Dicus, Product Manager at Ex Libris Group and Co-chair of the NCIP Standing Committee. “One of the larger changes in 2.02 is the addition of a ‘Lookup Item Set’ service. This new service allows an initiator to query with a single request a set of items that may share some kind of relationship, such as multiple volumes of a book set. Additionally, ‘Bibliographic Record Id’ has been made repeatable within ‘Bibliographic Description’. This makes it possible, for example, for an initiator to send an ‘Accept Item’ message passing both an OCLC number and a Library of Congress Catalog Number. And ‘Request Item’ has been changed so that it now accepts both ‘Bibliographic Record Id’ and ‘Item Id’, and both elements are repeatable. In earlier versions, ‘Request Item’ accepted either a single ‘Bibliographic Record Id’ or a single ‘Item Id’.”
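
As a rough, non-normative sketch of what such a request might look like on the wire, the snippet below assembles a Lookup Item message as XML; the element names and namespace are approximations inferred from the names quoted above, and the authoritative structure is defined by the Z39.83 Part 2 implementation profile and its XML schema.

    # Non-normative sketch of an NCIP-style Lookup Item request.
    # Element names and the namespace are approximations; consult the ANSI/NISO
    # Z39.83 Part 2 implementation profile and XML schema for the real structure.
    import xml.etree.ElementTree as ET

    NS = "http://www.niso.org/2008/ncip"  # assumed NCIP 2.x namespace
    ET.register_namespace("", NS)

    message = ET.Element(f"{{{NS}}}NCIPMessage")
    lookup = ET.SubElement(message, f"{{{NS}}}LookupItem")
    item_id = ET.SubElement(lookup, f"{{{NS}}}ItemId")
    ET.SubElement(item_id, f"{{{NS}}}ItemIdentifierValue").text = "39000123456789"  # hypothetical barcode

    print(ET.tostring(message, encoding="unicode"))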

“In addition to the standard, the NCIP Standing Committee has made available supporting tools and documentation to aid in implementation,” stated Rob Walsh, representative for EnvisionWare, the Maintenance Agency for NCIP. “An XML schema is available that matches the implementation profile defined in Part 2 of the standard. The document ‘Introduction to NCIP’ provides librarians and other implementers with a basic introduction to NCIP and links to sources of additional information about the standard. The ‘NCIP Core Message Set’ defines a minimal set of nine messages (out of the full set of 45) that supports the majority of the current functionality for resource sharing and self-service applications and provides a simpler starting point for new implementers. And an NCIP Implementer Registry collects information about vendors’ implementations, specifically which versions and which messages are supported.”

“This new version of NCIP illustrates the responsiveness of the NCIP Standing Committee to the needs of the libraries and system vendors who are using the standard,” asserts Todd Carpenter, NISO Executive Director. “The NCIP Standing Committee has semiannual meetings and monthly conference calls to discuss implementation practices and ways of promoting the standard. We encourage users of the new version to share their experiences with the committee.”

The NCIP standard and the supporting tools and documentation are freely available from the NCIP Workroom on the NISO web site: www.niso.org/workrooms/ncip/

E-books in libraries: briefing document from Berkman Center for Internet & Society

“E-books in libraries” is a briefing document developed with helpful inputs from industry stakeholders and other practitioners in preparation for the “E-books in libraries” workshop, hosted on February 24, 2012, by the Berkman Center for Internet & Society at Harvard University, with the generous support of the Charles H. Revson Foundation.

The “E-books in libraries” workshop was convened as part of a broader effort to explore current issues associated with digital publishing business models and access to digitally-published materials in libraries. Workshop attendees, including representatives from leading publishers, libraries, academia, and other industry experts, were invited to identify key challenges, share experiences, and prioritize areas for action. This document, authored by David O’Brien, Urs Gasser, and John Palfrey, contains some updates reflecting new developments following the February workshop (up to June 2012), and is intended to build on and continue that discussion with a broader audience, and encourage the development of next steps and concrete solutions.

Beginning with a brief overview of the history and the current state of the e-book publishing market, the document traces the structure of the licensing practices and business models used by distributors to make e-books available in libraries, and identifies select challenges facing libraries and publishers. Where possible, the authors have made an effort to incorporate stakeholder perspectives and real-world examples to connect analysis to the actual questions, issues, and challenges that arise in practice. The document concludes with a number of informative resources – including news articles, whitepapers, stakeholder and trade association reports, and other online sources – that might inform future conversations, investigations, pilot projects, and best practices in this space.

The topics presented in this briefing come at an important moment for the publishing industry, and in particular the e-book market, both of which have been rapidly evolving over the last several years. These changes are, in turn, affecting the models used by publishers’ horizontal and vertical business partners, such as libraries and distributors. While the authors have endeavored to provide accurate information within this document, the dynamic flux of the industry can make it difficult to accurately capture a comprehensive snapshot of its current state. For instance, during the course of their initial research the authors found that some information published as recently as September 2011 had already become outdated; other salient information is not made publicly available for competitive reasons. Please note that the authors consider this to be a working document, which they hope to develop further as information changes and the issues evolve.

Download paper (SSRN): http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2111396

Aligning national approaches to digital preservation: new publication from Educopia

Katherine Skinner, Executive Director, Educopia Institute, has announced the publication of “Aligning National Approaches to Digital Preservation”. On May 23-25, 2011, more than 125 delegates from more than 20 countries gathered in Tallinn, Estonia, for the “Aligning National Approaches to Digital Preservation” conference. At the National Library of Estonia, this group explored how to create and sustain international collaborations to support the preservation of our collective digital cultural memory. Organized and hosted by the Educopia Institute, the National Library of Estonia, the US Library of Congress, the University of North Texas, and Auburn University, this gathering established a strong foundation for future collaborative efforts in digital preservation.

This publication contains a collection of peer-reviewed essays that were developed by conference panels and attendees in the months following ANADP. Rather than simply chronicling the event, the volume deliberately broadens and deepens its impact by reflecting on the ANADP presentations and conversations and establishing a set of starting points for building a greater alignment across digital preservation initiatives. Above all, it highlights the need for strategic international collaborations to support the preservation of our collective cultural memory.

This guide is written with a broad audience in mind that includes librarians, archivists, scholars, curators, technologists, lawyers, researchers, and administrators at many different types of memory organizations.

“Aligning National Approaches to Digital Preservation” is the second of a series of volumes edited by Katherine Skinner (Series Editor) and published by the Educopia Institute describing successful collaborative strategies and articulating new models that may help memory organizations work together for their mutual benefit.

Readers may access “Aligning National Approaches to Digital Preservation” as a freely downloadable PDF and/or as a print publication for purchase.

To download or order the book, visit: http://educopia.org/publications/ANADP

Educopia Institute: http://educopia.org/

Building the Grateful Dead Archive Online: videos from CNI spring 2012 meeting

“Building the Grateful Dead Archive Online: The Golden Road to Unlimited Devotion”, a project briefing session presented at the Coalition for Networked Information’s (CNI) spring 2012 membership meeting by Virginia Steel and Robin Chandler of the University of California, Santa Cruz, is now available on CNI’s YouTube and Vimeo video channels.

YouTube: http://youtu.be/igPVcJvb3YA; Vimeo: https://vimeo.com/44839434

The Grateful Dead Archive (GDA) at UC Santa Cruz represents one of the most significant popular culture collections of the twentieth century and documents the band’s activity and influence in contemporary music from 1965 to 1995. At CNI’s spring 2012 membership meeting, UC Santa Cruz University Librarian Virginia Steel and Project Manager Robin Chandler discussed the particular challenges of merging a traditional archive with a socially constructed one.

Other videos available from the spring 2012 CNI meeting include:

“Reinventing the Research University to Serve a Changing World” (J. Duderstadt): www.cni.org/events/membership-meetings/past-meetings/spring-2012/plenary-sessions/#opening

“Key Trends in Teaching & Learning: Aligning What We Know About Learning to Today’s Learners” (P. Long): http://youtu.be/8DtRh4PuUco

“Archiving Large Swaths of Digital Content: Lessons from Archiving the Occupy Movement” (Besser et al.): http://youtu.be/CZbvCorGCow

“National Status of Data Management: Current Research in Policy and Education” (Halbert et al.): http://youtu.be/mFq2l4bzn-Y

To see all videos available from CNI, visit CNI’s channels on YouTube (www.youtube.com/cnivideo) and Vimeo (http://vimeo.com/channels/cni).

CNI: www.cni.org/

Wikipedia and libraries: what’s the connection? Webinar recording now available

In this webinar, OCLC Research Wikipedian in Residence Max Klein discusses what’s happened between Wikipedia and libraries in the past and what it means for the future. In addition, he explains the connection between Wikipedia and libraries, discusses the variety of Wikipedian in Residence positions and the opportunities for libraries working with Wikipedia, as well as describing how OCLC Research is working to integrate Authority Control into Wikipedia. He also presents “Behind the Secret Door: Tips and Tricks for Librarians using Wikipedia.”

“Wikipedia and Libraries: What’s the Connection?” Webinar Page: www.oclc.org/research/events/2012/07-31.html

OCLC survey among British, German and Dutch librarians shows changing priorities

A survey conducted by OCLC in spring of 2012 among librarians from the UK, Germany and The Netherlands shows that practitioners expect library usage to change considerably. About three-quarters expect a rise in online visits within the next year, and two-thirds of those who responded anticipate a change in the primary reason to visit the library in the next five years.

“Libraries: A Snapshot of Priorities and Perspectives” is now available on the OCLC web site, where reports for the UK, Germany and The Netherlands can be downloaded.

The increase in online visits that is expected by 71-85 percent of librarians (percentages vary by country) contrasts dramatically with their expectations of low growth in physical visits in the next 12 months. This suggests that users will continue to rely on libraries for their information, but not necessarily by coming through the library doors.

The primary reason for library use will also change in the next five years, according to 59-71 percent of responding librarians. The survey confirms that borrowing physical items is still the primary reason for visiting libraries today, but respondents expect access to online databases and journals to grow in popularity as a primary reason for “visits” by 2017.

As a library cooperative, OCLC initiates in-depth studies and topical surveys regularly to help libraries better understand issues and trends that affect librarianship and help plan for the future. “This is the first time we conducted a survey specifically among European librarians, so that the report can focus on the findings that are relevant for this particular part of the world,” said Eric van Lubeek, Managing Director of OCLC EMEA.

According to the survey, the top priorities for libraries’ activities include delivering eContent, forming community partnerships, the library’s role in the future of higher education, visibility of the library’s collection, and demonstrating the library’s value to its funders.

There were 279 librarians from the UK, 143 librarians from Germany and 152 librarians from The Netherlands who participated in the survey held among public and academic library staff and management. OCLC conducted a similar study among librarians in the USA in 2011.

Snapshot reports from all these surveys can be found on the OCLC web site: www.oclc.org/reports

Privacy and data management on mobile devices: new report from Pew Research

More than half of mobile application users have uninstalled or avoided certain apps due to concerns about the way personal information is shared or collected by the app, according to a nationally representative telephone survey conducted by the Pew Research Center’s Internet & American Life Project.

In all, 88 percent of US adults now own cell phones, and 43 percent say they download cell phone applications or “apps” to their phones. Among app users, the survey found:

  • 54 percent of app users have decided to not install a cell phone app when they discovered how much personal information they would need to share in order to use it.

  • 30 percent of app users have uninstalled an app that was already on their cell phone because they learned it was collecting personal information that they did not wish to share.

Taken together, 57 percent of all app users have either uninstalled an app over concerns about having to share their personal information, or declined to install an app in the first place for similar reasons.

“As mobile applications become an increasingly important gateway to online services and communications, users’ cell phones have become rich repositories that chronicle their lives,” said Mary Madden, Research Associate for the Project and a co-author of the report. “The way a mobile application handles personal data is a feature that many cell phone owners now take into consideration when choosing the apps they will use.”

Outside of some modest demographic differences, app users of all stripes are equally engaged in these aspects of personal information management. Owners of both Android and iPhone devices are also equally likely to delete (or avoid entirely) cell phone apps due to concerns over their personal information.

In addition to these measures of app-specific behaviors, the Pew Internet Project also asked about three general activities related to personal data management on cell phones. Among all those who own a cell phone of any kind, the survey found that:

  • 41 percent of cell owners back up the photos, contacts and other files on their phone so that they have a copy in case their phone is ever broken or lost;

  • 32 percent of cell owners have cleared the browsing history or search history on their phone; and

  • 19 percent of cell owners have turned off the location tracking feature on their cell phone because they were concerned that other individuals or companies could access that information.

Even as cell owners take steps to maintain control over their personal data in the context of mobile phones, the physical devices themselves can occasionally fall into the wrong hands. Some 31 percent of cell owners have lost their cell phone or had it stolen, while 12 percent of cell owners say that another person has accessed their phone’s contents in a way that made them feel that their privacy had been invaded. Despite the fact that backing up one’s phone is typically conducted as a safeguard in the event that the phone is lost or stolen, cell owners who have actually experienced a lost or stolen phone are no more likely than average to back up the contents of their phone.

The youngest cell phone users (those ages 18-24) are especially likely to find themselves in each of these situations. Some 45 percent of cell owners in this age group say that their phone has been lost or stolen, and 24 percent say that someone else has accessed their phone in a way that compromised their privacy.

Smartphone owners are especially vigilant when it comes to mobile data management. Six in ten smartphone owners say that they back up the contents of their phone; half have cleared their phone’s search or browsing history; and one-third say that they have turned off their phone’s location tracking feature.

Yet despite these steps, smartphone owners are also twice as likely as other cell owners to have experienced someone accessing their phone in a way that made them feel like their privacy had been invaded. Owners of smartphones and more basic phones are equally likely to say their phone has been lost or stolen.

“The rise of the smartphone has dramatically altered the relationship between cell owners and their phones when it comes to monitoring and safeguarding their personal information,” said Aaron Smith, a Research Associate with the Project and report co-author. “The wealth of intimate details stored on smartphones makes them akin to the personal diaries of the past – the information they contain is hard to replace if lost, and potentially embarrassing in the wrong hands.”

This Pew Internet report is based on a survey conducted from March 15-April 3, 2012 among 2,254 adults ages 18 and over, including surveys in English and Spanish and on both landline and cell phones. The overall sample has a margin of error of plus or minus 2.4 percentage points. Some 1,954 cell users were interviewed in this sample and many of the results published here involve that subset of users. The margin of error for data involving cell users is plus or minus 2.6 percentage points.

Read the full report: http://pewinternet.org/Reports/2012/Mobile-Privacy.aspx

Saving and sharing the American Geographical Society Library’s historic nitrate negative images

With generous support from the National Endowment for the Humanities (NEH), the American Geographical Society Library (AGSL) of the University of Wisconsin-Milwaukee (UWM) Libraries has been able to institute a two-year, $315,000 grant project. The scope of the project is to re-house, scan, create metadata for, and preserve on a long-term basis the approximately 68,000 nitrate negatives in its photography collection.

In 2010 the NEH provided a $315,000 grant in support of a two-year project to preserve and provide access to the AGSL’s 70,000 nitrate negatives. These invaluable images span every continent with the exception of Antarctica and document a global range of peoples, cultures, and landscapes as seen through the eyes of geographers, adventurers and professional photojournalists.

The AGS Library at the University of Wisconsin-Milwaukee is the former research library of the AGS, which was founded in the early 1850s to promote the collection of geographical information and to establish and maintain a library with a collection of maps, charts and instruments. Through the years, the AGS Library succeeded in building a distinguished photographic collection, with images dating from the mid-nineteenth century to the present, which included a sizable number of nitrate negatives. Cellulose nitrate film, introduced in 1889, was an important innovation in photography and was popular for well over half a century. It is, however, a volatile and flammable material, and it was clear that the AGSL’s deteriorating negatives required immediate attention.

The NEH-funded project enabled the AGS Library to rehouse, scan, create metadata for, publish online and provide cold storage for these historic images.

To view the results of this project: www4.uwm.edu/libraries/digilib/NEHgrant/
