Emerald Group Publishing Limited
New & Noteworthy
Article Type: New & noteworthy From: Library Hi Tech News, Volume 31
American Library Association launches educational 3D printing policy campaign
The American Library Association (ALA) has announced the launch of “Progress in the Making”, a new educational campaign that will explore the public policy opportunities and challenges of three-dimensional (3D) printer adoption by libraries. The association has released “Progress in the Making: An Introduction to 3D Printing and Public Policy”, a tip sheet that provides an overview of 3D printing, describes a number of ways libraries are currently using 3D printers, outlines the legal implications of providing the technology and details ways that libraries can implement simple yet protective 3D printing policies.
“As the percentage of the nation’s libraries helping their patrons create new objects and structures with 3D printers continues to increase, the legal implications for offering the high-tech service in the copyright, patent, design and trade realms continue to grow as well”, said Alan S. Inouye, Director of the ALA Office for Information Technology Policy (OITP). “We have reached a point in the evolution of 3D printing services where libraries need to consider developing user policies that support the library mission to make information available to the public. If the library community promotes practices that are smart and encourage creativity, it has a real chance to guide the direction of the public policy that takes shape around 3D printing in the coming years”.
Over the coming months, ALA will release a white paper and a series of tip sheets that will help the library community better understand and adapt to the growth of 3D printers, specifically as the new technology relates to intellectual property law and individual liberties.
This tip sheet is the product of a collaboration between the Public Library Association (PLA), the OITP and United for Libraries, coordinated by OITP Information Policy Analyst Charlie Wapner.
View the tip sheet (pdf):
Adobe Digital Editions 4 gathers ebook readers’ data, transmits it unencrypted
A posting on Digital Book Wire, the news blog of the Digital Book World Web site, summarizes recent reports on the gathering and transmission of data from readers using the Adobe Digital Editions 4 ebook platform:
Adobe confirms some details of recent reports by The Digital Reader and Ars Technica that Adobe Digital Editions 4, the latest version of the widely used ebook platform, is gathering extensive data on its users’ ebook reading habits.
According to Nate Hoffelder at The Digital Reader, “Adobe is gathering data on the ebooks that have been opened, which pages were read, and in what order”.
Reached for comment, Adobe confirms that those data gathering practices are, indeed, in place. “Adobe Digital Editions allows users to view and manage eBooks and other digital publications across their preferred reading devices – whether they purchase or borrow them”, Adobe said in a statement this afternoon. The statement continues:
Update: Hoffelder reported that Adobe Digital Editions appeared to be gathering information on his entire ebook library, not just the titles viewed through Adobe Digital Editions. In a follow-up communication with Adobe, which included the file Hoffelder posted to support this suspicion, the company reiterated its earlier statement that “information is solely collected for the eBook currently being read by the user and not for any other eBook in the user’s library or read/available in any other reader”.
According to the latest reports, those data appear to be delivered to Adobe’s servers as clear text, raising concerns that third parties could easily gain access to them.
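The concern with clear-text transmission is that anyone observing the network path (on shared Wi-Fi, at an ISP, or via a proxy) can read the payload with no decryption step at all. The sketch below illustrates the point with a hypothetical JSON payload loosely shaped like the reported data (book title, pages read); the field names are invented for illustration and are not Adobe’s actual wire format.

```python
import json

# Hypothetical clear-text analytics payload; field names are illustrative,
# not Adobe's actual wire format.
intercepted = '{"title": "Example Novel", "pages_read": [1, 2, 3], "duration_s": 420}'

def read_cleartext_payload(raw: str) -> dict:
    """An unencrypted payload can be parsed directly by any observer."""
    return json.loads(raw)

record = read_cleartext_payload(intercepted)
print(record["title"], len(record["pages_read"]))
```

With TLS in place, an observer would instead see only an opaque encrypted stream, which is why the reports singled out the lack of encryption rather than the data collection alone.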
Read the full Digital Book Wire blog entry at:
Blog post from The Digital Reader:
Article at Ars Technica:
Institute of Museum and Library Services lifecycle management of Electronic Theses and Dissertations project resources now available
Matt Schultz, currently serving as the Program Manager for the MetaArchive Cooperative, has officially announced the successful conclusion of the Lifecycle Management of Electronic Theses & Dissertations (ETDs) project, which was funded by the Institute of Museum and Library Services (IMLS).
Deliverables of the project include:
Guidance Documents for Lifecycle Management of ETDs – developed by the University of North Texas Libraries, together with the Educopia Institute, the MetaArchive Cooperative, the Networked Digital Library of Theses and Dissertations (NDLTD) and the libraries of Virginia Tech, Rice University, Boston College, Indiana State University, Pennsylvania State University and University of Arizona, to promote best practices and to increase the capacity of academic libraries to reliably preserve ETDs.
ETD Lifecycle Management Tools Manual – provides resources and instruction to cover five major areas of lifecycle curation for ETDs, including tools for virus checking, file format identification, preservation metadata, ETD submission and reference link archiving.
ETD Lifecycle Management Workshop – a modular set of Creative Commons (CC-BY 4.0) licensed materials that provides both conceptual and practical information for both the individual professional/practitioner and the broader institution to help improve the overall curation and preservation of ETD content.
Each of the above deliverables is freely available under open source and Creative Commons licenses, and can be obtained on the Educopia Institute Web site.
Guidance documents for lifecycle management of ETDs: http://www.educopia.org/publishing/gdlmetd
ETD lifecycle management tools manual: http://www.educopia.org/research/etd/etdlmtm
ETD lifecycle management workshop: http://www.educopia.org/research/etd/workshop
IMLS lifecycle management of ETDs home: http://www.educopia.org/research/etd
Information Standards Quarterly issue on open access infrastructure
The National Information Standards Organization (NISO) has announced the publication of a special themed issue of Information Standards Quarterly (ISQ) on the topic of open access infrastructure. As Guest Content Editor, Liam Earney, Head of Library Support Services, Jisc, notes, “2013 seems to have been a watershed for open access (OA). Driven by a number of policy announcements from funding bodies and governments worldwide, the question is no longer whether open access will or should happen, but rather how will it be implemented in a sustainable way”. Earney has gathered in this issue of ISQ a wealth of insights from a wide variety of viewpoints – publishers, funders, universities, intermediaries, standards bodies and open access experts on where we are and where we are going with a sustainable OA infrastructure.
In the feature article, ISQ Managing Editor Cynthia Hodgson highlights, through a series of interviews with experts in the OA arena, some of the major areas of infrastructure that are needed, including institutional policies, compliance tracking and reporting, publishing tools, new economic models and licensing and sustainability. A glossary of key OA terms used throughout the issue is included.
The in-practice section describes the OA experiences of both a library and a publisher. Martin Moyle, Catherine Sharp, and Alan Bracey (University College London) describe the research library’s perspective in The Role of Standards in the Management of Open Access Research Publications. The UCL Library is responsible for both the green OA repository and the management of gold OA publications within its institution. David Ross (SAGE Publishing) in A Publisher’s Perspective on the Challenges of Open Access describes the experiences of SAGE in responding to “this evolving, competitive landscape”. SAGE publishes on behalf of almost 300 learned societies, associations and institutes and in 2013: “adopted one of the most liberal policies with regard to the author’s accepted manuscript (AAM), allowing authors to post this in an institutional repository or their personal website immediately, with no embargo”.
Two recent and important project initiatives in this space were launched in 2013: the SHared Access Research Ecosystem (SHARE) by the Association of Research Libraries (ARL), the Association of American Universities (AAU) and the Association of Public and Land-grant Universities (APLU); and the Clearinghouse for the Open Research of the United States (CHORUS), a cooperative effort involving publishers, funding agencies, technology and resource partners and other organizations involved in scholarly publishing for the public benefit. In The Need for Research Data Inventories and the Vision for SHARE, Clifford Lynch describes the potential role of SHARE in the overall scheme of managing research data. “Most fundamentally, SHARE functions as an inventory of research data that is produced by scholars within the higher education community”. Alice Meadows and Howard Ratner explain how CHORUS Helps Drive Public Access. “CHORUS supports public access to federally funded research by acting as an information bridge, linking the public to freely accessible journal articles directly on publisher platforms, where the articles can be read and preserved in their scholarly context”.
Cameron Neylon, Ed Pentz, and Greg Tananbaum, the Co-chairs of NISO’s Access and Licensing Indicators Working Group, provide a preview of the forthcoming recommended practice on Standardized Metadata Elements to Identify Access and License Information. The goal of the project was to identify a set of metadata elements to describe both the accessibility of a specific article and the available reuse rights.
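In practice, the two indicators the working group proposed are a machine-readable flag stating whether an article is free to read and a URI pointing to its reuse license. A minimal sketch of how a publisher might embed these elements in article metadata follows; the namespace and element names below reflect the working group’s draft (`free_to_read` and `license_ref`), but since the recommended practice was still forthcoming at the time, treat them as subject to change and check the published recommendation for the authoritative form.

```python
import xml.etree.ElementTree as ET

# Namespace proposed by the NISO access/license indicators work.
ALI_NS = "http://www.niso.org/schemas/ali/1.0/"
ET.register_namespace("ali", ALI_NS)

# Minimal sketch of the two indicators: whether an article is free to
# read, and a URI identifying its reuse license (with a start date).
meta = ET.Element("article-meta")
ET.SubElement(meta, f"{{{ALI_NS}}}free_to_read")
lic = ET.SubElement(meta, f"{{{ALI_NS}}}license_ref", {"start_date": "2014-10-01"})
lic.text = "https://creativecommons.org/licenses/by/4.0/"

xml_out = ET.tostring(meta, encoding="unicode")
print(xml_out)
```

Because both elements are simple and machine-readable, downstream systems (repositories, discovery services, funder compliance checkers) can act on them without parsing human-readable license statements.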
“What comes across strongly from these articles is the complexity and interdependency of the issues that we face,” states Earney. “What also comes across strongly is the importance that all the authors place on the development and adoption by everyone involved of standards-based approaches to overcoming the challenges for a sustainable open access infrastructure”.
“NISO began issuing Information Standards Quarterly electronically in open access in 2011”, states Todd Carpenter, NISO’s Executive Director. “Recognizing some of the gaps in the current OA infrastructure, we launched the Access and Licensing Indicators project in 2013 to develop a recommended practice for open access metadata and indicators. Those recommendations should address some of the infrastructure gaps that are described in this issue of ISQ”.
ISQ is available in open access in electronic format on the NISO website. Both the entire issue on open access infrastructure and the individual articles may be freely downloaded. Print copies of ISQ are available by subscription and as print on demand.
To access the free electronic version, visit: http://www.niso.org/publications/isq/2014/
Open Access Repository Ranking of German repositories launches
The Open Access Repository Ranking (OARR) was launched September 9, 2014 during the Open Access Tage 2014 conference held in Cologne, Germany. The OARR is a ranking that lists all German open access repositories according to a metric that evaluates a certain set of criteria that are summarized in categories. This metric is a synthesis of different schemes and studies that survey and describe open access repositories. OARR is based on a metric that is open, transparent and developed in consultation with the open access community. Because OARR tries to keep track of ever-changing technical and political developments in the sphere of open access, the metric is not set in stone and will be updated iteratively. Unlike other rankings that focus on search engine optimization (SEO) and the size of an open access repository, OARR represents a benchmark of what attributes each open access repository should have to provide the best possible service to its users.
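The general shape of such a metric is a checklist: criteria are grouped into categories, and a repository’s score reflects how many criteria it fulfils. The sketch below illustrates that idea only; OARR’s actual categories, criteria and any weighting are defined in its published metric, and the names and numbers here are invented for illustration.

```python
# Illustrative only: OARR's real categories and criteria are defined in
# its published metric; these names are invented.
CATEGORIES = {
    "policies":   ["oa_mandate_stated", "metadata_reuse_policy"],
    "interfaces": ["oai_pmh", "rss_feed"],
    "services":   ["usage_statistics"],
}

def checklist_score(repo_attrs: set) -> int:
    """Count how many catalogued criteria a repository fulfils."""
    return sum(1 for crit_list in CATEGORIES.values()
                 for crit in crit_list if crit in repo_attrs)

score = checklist_score({"oai_pmh", "usage_statistics", "oa_mandate_stated"})
print(score)  # 3 of 5 criteria met
```

A checklist metric of this kind rewards repositories for service attributes rather than sheer size, which matches OARR’s stated contrast with SEO- and size-driven rankings.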
The ranking is planned to be published annually at the Open Access Tage conference in September. In the preceding January, each open access repository that meets the OARR definition will be contacted and asked to submit its data via an online form. This procedure is intended to minimize potential indexing errors by asking the respective repository managers, as they are the most reliable source of information. The OARR team reviews all submissions, assuring their quality and validity.
OARR is a research project based at the Information Management Department of Professor Peter Schirmbacher at the Berlin School of Library and Information Science (BSLIS), Humboldt-Universität zu Berlin (HU Berlin). OARR cooperates with the Bielefeld Academic Search Engine (BASE) at Bielefeld University and is supported by the Deutsche Initiative für Netzwerkinformation e. V. (DINI) working group on “Electronic Publishing”.
OARR frequently asked questions (FAQs): http://repositoryranking.org/?page_id=997
Digital Preservation Coalition digital preservation handbook: survey results and contents outline published
The Digital Preservation Coalition (DPC) and Charles Beagrie Ltd have announced the release of two important documents which will form the foundations of the new edition of the DPC Digital Preservation Handbook: the results of a major survey into audience needs, and the first full outline of content.
“We are very keen to make sure that the new edition of the handbook fits with people’s actual needs so we were very encouraged by the substantial response to the consultation document which we sent out before summer,” explained Neil Beagrie who is editor of the new edition of the handbook. “We estimate that the digital preservation community numbers around 1,500 people in total, and there were 285 responses to the survey”.
“It’s a very large sample of the community but it’s also reassuringly diverse. There’s a strong representation from higher education and public sector agencies but there’s also a sizeable group from industry, from charities as well as museums and community interest groups. When asked if they would use the handbook, not a single respondent said no”.
“The survey has directly informed the contents of the new handbook,” explained William Kilbride, Executive Director of the DPC. “We started with an idea of the gaps and the many parts that had become outdated since the original handbook was published. So we invited users to tell us what they wanted and how they wanted it – both in terms of content and presentation. The project team has responded thoughtfully to these requests so I am confident that the resulting content outline is tailored to people’s needs. Even so, we remain open to suggestions and comments. This will help ensure that the handbook remains relevant for many years to come”.
The retention of previous content and the proposed additions are based on the results of the Handbook user consultation and survey. All sections of the Handbook will be revised and its functionality significantly enhanced to create an entirely new online edition. The detailed comments received will guide the content of the sub-sections. Detailed content preservation case studies will also be featured in the new edition of the Handbook.
Report on the preparatory user consultation on the 2nd edition of the digital preservation handbook:
Draft Outline of the 2nd Edition of the Digital Preservation Handbook: http://www.dpconline.org/component/docman/doc_download/1306-handbook-new-contents
National Records of Scotland publishes its digital preservation strategy
In late September National Records of Scotland (NRS) published its Digital Preservation Strategy. The strategy looks back at work done to date and defines the key requirements for the NRS. It sets out what NRS will do over the next five years to establish a fully functioning sustainable digital repository that encompasses policies and procedures for reliable, controlled access to secure archival digital records. The value of partnership working and the leadership role of the NRS within the Scottish archives community are considered; also highlighted is the value of national and international partnerships.
NRS welcomes feedback on the Strategy – comments or questions may be sent to Electronic.Records@nas.gov.uk.
NRS Digital Preservation Strategy (411 KB, PDF): http://www.nrscotland.gov.uk/files/record-keeping/nrs-digital-preservation-strategy.pdf
KODAK Alaris scanning system delivers scanned documents and photos directly to genealogy site
Fascination with tracing family histories shows no signs of slowing down, which means tools that simplify and enhance the process are increasingly in demand. For example, popular genealogy research company, Ancestry.com, has grown to approximately 2.7 million subscribers. Resources such as FamilySearch, a service provided by The Church of Jesus Christ of Latter-day Saints (LDS), feature a massive global collection of freely accessible genealogical records, photos and stories. FamilySearch also hosts a physical library at its headquarters in Salt Lake City, Utah and has more than 4,700 family history centers in 125 countries around the world.
Digitizing photos and documents, sharing and preserving them on the FamilySearch website, and enabling the general public to enrich the data by providing their own pictures and stories just got a lot easier. Recently, E-Z Photo Scan, a leading reseller for Kodak Alaris, worked closely with FamilySearch to upgrade its equipment to three state-of-the-art KODAK Picture Saver Scanning Systems (PS50 and PS80 models). The new scanners feature a specially designed transport which treats fragile photos and documents with extra-gentle care. The scanners handle both sides of each photo in a single pass at speeds up to 85 prints per minute.
“One situation we needed to overcome was that many photos brought to our family history centers and headquarters are in very old albums,” said Scott Lloyd, Desktop Engineer for Patron and Partner Services at FamilySearch. “It would be dangerous to try to take brittle photos that might be 100 years old out of album pages, so we were looking for a good solution to this issue”.
The KODAK Picture Saver Scanning Systems include a legal-sized flatbed accessory that permits scanning of entire photo album pages without having to remove individual pictures or take off a plastic sheet cover. The KODAK Photo Selector Accessory automatically extracts individual images from a composite image, such as a multi-photo album page. It then saves each photo as a separate digital file.
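The underlying idea of extracting individual photos from a composite scan is region segmentation: treat non-background pixels as content and group them into connected regions, one per photo. The KODAK Photo Selector Accessory’s actual algorithm is proprietary; the toy sketch below only illustrates the concept on a binary grid, where each island of 1s stands in for one photo on an album page.

```python
# Conceptual sketch of photo extraction from a composite scan: group
# non-background cells into 4-connected regions, one region per "photo".
# This illustrates the idea only; the real product's algorithm is proprietary.
def find_regions(grid):
    rows, cols = len(grid), len(grid[0])
    seen, regions = set(), []
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] and (r, c) not in seen:
                # Flood-fill one region starting from an unseen content cell.
                stack, region = [(r, c)], []
                seen.add((r, c))
                while stack:
                    y, x = stack.pop()
                    region.append((y, x))
                    for ny, nx in ((y+1, x), (y-1, x), (y, x+1), (y, x-1)):
                        if 0 <= ny < rows and 0 <= nx < cols \
                           and grid[ny][nx] and (ny, nx) not in seen:
                            seen.add((ny, nx))
                            stack.append((ny, nx))
                regions.append(region)
    return regions

# Two "photos" on a background of zeros.
page = [[1, 1, 0, 0, 0],
        [1, 1, 0, 1, 1],
        [0, 0, 0, 1, 1],
        [0, 0, 0, 0, 0]]
print(len(find_regions(page)))  # 2 separate photos detected
```

Each detected region would then be cropped and saved as its own file, which is exactly the behaviour the accessory delivers for album pages.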
To further enhance the process of uploading treasured family records directly into a user’s account, Kodak Alaris and E-Z Photo Scan introduced the KODAK WebUploader for FamilySearch. The WebUploader is a powerful tool to view, select and seamlessly upload collections immediately after they are digitized. Plus, the WebUploader can easily upload files to be saved in other locations, such as a folder on the user’s hard drive.
FamilySearch is currently operating multiple KODAK PS80 Picture Saver Scanning Systems. “Individuals are bringing in 400 to 500 pictures and they are scanned with outstanding quality in 15 minutes or so”, Lloyd says. “We have now ordered 20 more PS80 Systems through E-Z Photo Scan to send out to additional centers because they are so easy to use and our customers are very satisfied with the quality of the captured images. The new WebUploader will make the process even more efficient”.
While the WebUploader is designed specifically for FamilySearch users, anyone who has pictures and documents that need to be uploaded can take advantage of the Picture Saver Scanning Systems’ Smart Touch functionality, which enables users to directly scan and send files to destinations such as SharePoint, Box, Evernote and other online repositories.
See a video on the WebUploader: https://www.youtube.com/watch?v=lZ_-RZ-_Ws8
KODAK picture saver scanning systems: http://www.kodakalaris.com/go/picturesavernews
KODAK Alaris document imaging: http://kodakalaris.com/go/dinews
International Image Interoperability Framework releases new versions of APIs
Access to image-based resources is fundamental to research, scholarship and the transmission of cultural knowledge. Digital images are a container for much of the information content in the Web-based delivery of images, books, newspapers, manuscripts, maps, scrolls, single sheet collections and archival materials. Yet many of the Internet’s image-based resources are locked up in silos, with access restricted to bespoke locally built applications.
A growing community of the world’s leading research libraries and image repositories has embarked on an effort to collaboratively produce an interoperable technology and community framework for image delivery.
The International Image Interoperability Framework (IIIF) has the following goals:
to give scholars an unprecedented level of uniform and rich access to image-based resources hosted around the world;
to define a set of common application programming interfaces that support interoperability between image repositories; and
to develop, cultivate and document shared technologies, such as image servers and Web clients, that provide a world-class user experience (UX) in viewing, comparing, manipulating and annotating images.
The IIIF community has announced the release of the second major version of its specifications intended to provide a shared layer for dynamic interactions with images and the structure of the collections and objects of which they are part. These application programming interfaces (APIs) are used in production systems to enable cross-institutional integration of content, via mix-and-match of best-of-class front-end applications and servers.
This release adds additional functionality derived from real-world use cases needed by partners within the community, and reflects more than a year of experience with the previous versions and significant input from across the cultural heritage community. It also formalizes many of the aspects that were implicit in the initial versions and puts into place a manageable framework for sustainable future development. Detailed change notes are available.
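The interoperability of the Image API rests on a fixed URI pattern: a client requests any region, size, rotation, quality and format of an image by composing a URL of the form {base}/{identifier}/{region}/{size}/{rotation}/{quality}.{format}, and any conformant server can answer it. A small sketch of building such a request follows; the server URL is a placeholder, not a live endpoint.

```python
# Sketch of the IIIF Image API URI pattern (version 2.0):
# {base}/{identifier}/{region}/{size}/{rotation}/{quality}.{format}
# The base URL below is a placeholder, not a live IIIF server.
def iiif_image_url(base, identifier, region="full", size="full",
                   rotation="0", quality="default", fmt="jpg"):
    return f"{base}/{identifier}/{region}/{size}/{rotation}/{quality}.{fmt}"

# Request a 120x140-pixel region of "page1", scaled to 90 pixels wide.
url = iiif_image_url("https://example.org/iiif", "page1",
                     region="125,15,120,140", size="90,")
print(url)
```

Because the pattern is shared, a viewer written against one institution’s images works unchanged against any other conformant repository, which is the cross-institutional integration the release notes describe.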
The specifications are available at:
Accompanying the release of the specifications is a suite of community infrastructure tools, including reference implementations of all versions of the Image API, collections of valid and intentionally invalid example Presentation API resource descriptions, plus validators for both APIs. Production-ready software is available for the full Image API stack, with server implementations in both Loris and IIP Server, and rich client support in the popular OpenSeadragon.
There will be a rollout and dissemination event on October 20, 2014, at the British Library to celebrate this release and engage with the wider community. Feedback, comments and questions are welcomed on the discussion list.
IIIF discussion list: https://groups.google.com/forum/#!forum/iiif-discuss
IIIF image and presentation API demos: http://iiif.io/apps-demos.html
International image interoperability framework: http://iiif.io/
re3data.org updates metadata schema for global Registry of Research Data Repositories
The Registry of Research Data Repositories, re3data.org, is a global registry of research data repositories that has identified and described over 900 data repositories from around the world covering all academic disciplines. It presents repositories for the permanent storage and access of data sets to researchers, funding bodies, publishers and scholarly institutions. re3data.org promotes a culture of sharing, increased access and better visibility of research data. The registry went live in autumn 2012 and is funded by the German Research Foundation (DFG).
Data repositories are currently described in the registry using the Description of Research Data Repositories Version 2.1 metadata schema that was published in December 2013. An update to the schema, Version 2.2, has been proposed based on experience and community input. The new version includes additional properties and controlled terms as well as new and revised definitions.
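A repository entry in the registry is a structured description: name, URL, subjects and so on, expressed against the schema’s defined elements. The sketch below shows what such a record might look like in miniature; the namespace URI and element names are assumptions in the spirit of the re3data.org schema, so consult the version 2.2 draft itself for the authoritative element names, attributes and required fields.

```python
import xml.etree.ElementTree as ET

# Assumed namespace and element names, modelled loosely on the re3data.org
# schema; check the published version 2.2 draft for the authoritative form.
R3D = "http://www.re3data.org/schema/2-2"
ET.register_namespace("r3d", R3D)

repo = ET.Element(f"{{{R3D}}}repository")
ET.SubElement(repo, f"{{{R3D}}}repositoryName").text = "Example Data Archive"
ET.SubElement(repo, f"{{{R3D}}}repositoryURL").text = "https://data.example.org"
ET.SubElement(repo, f"{{{R3D}}}subject").text = "Geosciences"

r3d_record = ET.tostring(repo, encoding="unicode")
print(r3d_record)
```

Controlled terms and precise definitions in the schema are what let the registry be searched consistently across disciplines, which is why the revision process centres on adding properties and tightening definitions.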
Further review and comments are being accepted through October 20, 2014, by commenting on the blog entry (http://www.re3data.org/2014/09/rfc-schema-version-2-2/) or by emailing email@example.com. Input from the community will be incorporated into the final release, which is expected in December 2014.
Project partners in re3data.org are the Berlin School of Library and Information Science at the Humboldt-Universität zu Berlin, the Library and Information Services department (LIS) of the GFZ (Deutsches GeoForschungsZentrum) German Research Centre for Geosciences, and the KIT Library at the Karlsruhe Institute of Technology (KIT). The partners are actively involved in the German Initiative for Network Information (DINI) and current research data management activities.
The draft schema (version 2.2) can be accessed at: http://doi.org/10.5281/zenodo.11748
Search re3data.org for repositories: http://service.re3data.org/search/
Registry of research data repositories, re3data.org: http://www.re3data.org/
BitCurator 1.0 software released; BitCurator Consortium launches
The BitCurator project has announced the release of BitCurator 1.0, a free and open-source digital forensics software environment for libraries, archives and museums (LAMs) to acquire and process born-digital materials. The BitCurator environment can be installed as a Linux environment; run as a virtual machine on top of other operating systems (Windows, Mac and Unix/Linux); or run as individual software tools, packages, support scripts and documentation. The software release is the culmination of a three-year (2011-2014) collaborative effort between the School of Information and Library Science (SILS) at the University of North Carolina at Chapel Hill and the Maryland Institute for Technology in the Humanities (MITH) at the University of Maryland. The project was made possible through two phases of funding from the Andrew W. Mellon Foundation.
“This is an exciting milestone”, says Christopher (Cal) Lee, principal investigator for the BitCurator project and associate professor at SILS. “Although there are already numerous collecting institutions across the globe that are using the BitCurator environment, release of version 1.0 is a further sign of the software’s maturity”.
Matthew Kirschenbaum, co-principal investigator for the project and associate director at MITH, concurs. “There is now widespread recognition that digital forensics methods and tools have a significant role in the cultural heritage sector. With the release of BitCurator 1.0, collecting professionals now have convenient access to a range of open source digital forensics tools to assist in the processing of born-digital and hybrid collections”.
Born-digital materials, or materials that originate in digital form, include 3 1/2" and 5 1/4" floppy disks, Zip disks, CD-ROMs, and DVDs.
Among its many functionalities, the BitCurator environment allows individuals to create forensic disk images, perform data triage tasks, analyze and report on file systems, identify personal and sensitive information (such as social security numbers or credit card information), and capture and export technical metadata.
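The sensitive-information step typically works by scanning raw disk contents for recognizable patterns such as social security or credit card numbers (in the BitCurator environment this is handled by bundled open-source forensics tooling). The sketch below illustrates the idea with deliberately simplified regular expressions; real detectors add checksum validation and context analysis to reduce false positives.

```python
import re

# Simplified illustrations of PII patterns; production detectors (such as
# the forensics tools BitCurator bundles) are far more careful than this.
PATTERNS = {
    "ssn":         re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d{4}[- ]?){3}\d{4}\b"),
}

def scan_for_pii(text):
    """Return every match of each pattern found in the text."""
    return {name: rx.findall(text) for name, rx in PATTERNS.items()}

sample = "Donor file: SSN 123-45-6789, card 4111-1111-1111-1111."
hits = scan_for_pii(sample)
print(hits)
```

Flagging such strings before material is made public lets curators redact or restrict files containing donors’ personal data, which is why this capability matters for born-digital collections.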
“The challenges involved in preserving digital media and the content stored on them are numerous,” says Jennie Knies, Manager of Digital Programs and Initiatives for the University of Maryland Libraries. “BitCurator is a fully contained system that contains easy-to-use interfaces to allow for some standard activities necessary for copying, reading, and curating digital media”.
With the completion of the BitCurator project, support for the BitCurator environment and associated user community is shifting to the BitCurator Consortium (BCC), an independent community-led membership association that will serve as the host and center of administrative, user and community support for the software. Charter members of the consortium include Duke University, Stanford University, New York University, the University of Maryland and the University of North Carolina Chapel Hill.
“Because it was initially developed as a joint effort between the University of North Carolina, Chapel Hill and the University of Maryland’s own digital humanities center (MITH), we feel strongly that the UMD Libraries should be a charter member of the BitCurator Consortium”, says Knies. “Future development of BitCurator can only serve to help us better do our jobs as stewards and curators of information”.
“Managing born-digital acquisitions is becoming a top concern in research libraries, archives, and museums worldwide”, shares co-founder Dr. Christopher (Cal) Lee. “The BCC now provides a crucial hub where curators can learn from each other, share challenges and successes, and together define and advance technical and administrative workflows for born-digital content”.
Institutions responsible for the curation of born-digital materials are invited to become members of the BCC. New members will join an active, growing community of practice and gain entry into an international conversation around this emerging set of practices. Other member benefits include voting rights; eligibility to serve on the BCC Executive Council and Committees; professional development and training opportunities; subscription to a dedicated BCC member mailing list; and special registration rates for BCC events.
BitCurator software, documentation and instructional materials can be downloaded from: http://wiki.bitcurator.net
WorldCat Discovery adds new features, introduces WorldCat Discovery API beta
OCLC has announced the addition of new features in WorldCat Discovery Services that provide updated displays to save time for the user and provide more meaningful results. WorldCat Discovery Services is an integrated suite of cloud-based applications that enables people to search WorldCat and also discover more than 1.5 billion electronic, digital and physical resources in libraries around the world.
New features added in September include:
Search and display of local bibliographic data: Now, libraries that have added information such as notes, genre designations, subject headings, authors and uniform titles to OCLC master records over the years will be able to search and display this valuable information for staff and users.
Control for full-text links: WorldCat Discovery now uses the WorldCat knowledge base to determine a library’s holdings for articles. Using the WorldCat knowledge base increases the accuracy of getting to the needed content and provides libraries additional control in terms of article coverage and what full-text links will display for the user.
New ways to use facets: Several changes to facets will help searchers quickly refine searches using the most popular facets. A new “top 6” format display features the six most-used formats, with the option to display additional format facets. In addition, most facets now include a feature to expand and collapse results with the number of hits per facet/sub-facet.
Expanded e-linking possibilities: When items have 856 links in the WorldCat master records, WorldCat Discovery will now display these links on the Availability tab. Much of this content is open access, providing users with instant access to these resources. Example sources of WorldCat master record 856 links include HathiTrust, United States Government Printing Office (GPO), Education Resources Information Center (ERIC), Internet Archive, Project Gutenberg, items added via the WorldCat Digital Collection Gateway and more.
OCLC is also introducing beta availability of the new WorldCat Discovery API (application programming interface), which provides access for libraries to search and find resources in both WorldCat and a central index of article and e-book metadata that represent the wide range of resources libraries provide to their users.
The WorldCat Discovery API exposes library collection data for items in WorldCat, including materials held by individual member libraries, consortia and libraries worldwide. Benefits include:
access to an ever-growing collection of central index metadata for which OCLC has been granted rights;
linked data response formats, so that library collections can speak the language preferred by the Web;
facet functionality, so that libraries can deliver a modern search experience with the ability to quickly drill down into search results; and
access to the latest data models, including entities.
“Providing data layer access to bibliographic information for entities like people, places, works and events fulfills a key piece of OCLC’s data strategy to help libraries evolve with the larger Web”, said Ted Fons, Executive Director of OCLC Data Services and WorldCat Quality Management. “Making this data available as an API means libraries can connect bibliographic information to data sources not traditionally even catalogued or curated by libraries – such as the Wikipedia data infrastructure. Through the API, libraries will be able to deliver content from their collections and context from across the Web to support their users’ research needs”.
The WorldCat Discovery API gives libraries the flexibility to use an OCLC-developed interface, create their own application or use the two in combination. The WorldCat Discovery API lets libraries rely on OCLC to manage the repetitive and resource-intensive tasks involved in keeping a local discovery index up to date. Library systems and development staff are then free to invest their time in other discovery projects, such as the creation of mobile apps and widgets and the enhancement of current user experiences to suit their unique needs.
Libraries can use the WorldCat Discovery API to extend an alternative discovery service such as VuFind or Blacklight to include WorldCat results, and as a building block alongside other APIs to create a total user discovery experience.
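As a minimal sketch of how a library developer might call such a RESTful service, the Python snippet below builds a bibliographic search request and asks for a linked-data (JSON-LD) response via content negotiation. The endpoint path and parameter name are assumptions for illustration only, not taken from OCLC’s documentation; consult the Developer Network site for the actual API reference and authentication requirements.

```python
from urllib.parse import urlencode

def build_discovery_search(query,
                           accept="application/ld+json",
                           base="https://beta.worldcat.org/discovery/bib/search"):
    """Build the URL and headers for a hypothetical bibliographic search.

    The base URL and the "q" parameter are illustrative assumptions;
    the Accept header requests one of the API's linked-data formats.
    """
    url = base + "?" + urlencode({"q": query})
    headers = {"Accept": accept}  # content negotiation for JSON-LD
    return url, headers

url, headers = build_discovery_search("open access")
```

A real client would also attach OCLC-issued API credentials before sending the request; building the URL and headers separately, as above, keeps that step easy to add.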
The WorldCat Discovery API is now available as a beta to a select number of libraries that subscribe to FirstSearch, WorldCat Local or WorldCat Discovery Services. Full availability to all eligible libraries and partners is expected in early 2015. Developers will find documentation and sample code libraries on the OCLC Developer Network site, as well as instructions for how to request access to the API.
WorldCat Discovery Services: http://www.oclc.org/en-USA/worldcat-discovery.html
WorldCat Discovery API at the Developer Network site: http://www.oclc.org/developer/develop/web-services/worldcat-discovery-api.en.html
Texas State Library and Archives Commission issues a white paper on discovery services
Libraries that have implemented a discovery service find the experience to be both challenging and rewarding. The Texas State Library and Archives Commission (TSLAC) contracted with Amigos Library Services to write a white paper to provide basic information concerning discovery services, as well as an overview of the major discovery service vendors. In addition to vendor information, the white paper covers what to look for in a discovery service, suggests best practices for implementing a discovery service and includes a checklist for evaluating your discovery service.
Discovery Services white paper (pdf format): https://www.tsl.texas.gov/sites/default/files/public/tslac/lot/TSLAC_WP_discovery__final_TSLAC_20140912.pdf
Success strategies for electronic content discovery and access: new white paper
A group of professionals from libraries, content providers and OCLC have published Success Strategies for Electronic Content Discovery and Access, a white paper that identifies data quality issues in the content supply chain and offers practical recommendations for improved usage, discovery and access of e-content in libraries.
Libraries strive to get the right resources in front of users where and when they need them. The E-Data Quality Working Group identified data quality issues in libraries’ electronic content, which directly affect users’ ability to find and use library resources. The library’s discovery and access systems play an important role in helping users sift through and access the large amount of electronically published content. However, users face a major barrier to discovery and access to these resources if the bibliographic metadata and holdings data are not of sufficient quality.
“The ability to generate value for published content depends on data quality”, said Suzanne Saskia Kemperman, OCLC Director, Business Development and Publisher Relations. “Effective discovery and easy access drive usage, which increases the value of the content to libraries”.
Success Strategies for Electronic Content Discovery and Access offers solutions for the efficient exchange of high-quality data among libraries, data suppliers and service providers, such as:
improve bibliographic metadata and holdings data;
synchronize bibliographic metadata and holdings data; and
use consistent data formats.
The white paper combines business and practical information with recommendations for the content supply chain to achieve successful content discovery and access.
“The authors of this white paper, the E-Data Quality Working Group, are representatives of libraries, data suppliers and service providers”, said Ms. Kemperman. “We recognize that all of us, as participants in the content supply chain, have a shared interest in improving content discovery and access for library users through better quality bibliographic metadata and holdings data. We also recognize that we have a shared responsibility to improve the quality of the data exchanged and to implement more effective data exchange workflows”.
Success Strategies for Electronic Content Discovery and Access was written by the E-Data Quality Working Group: Suzanne Saskia Kemperman, OCLC, Director, Business Development and Publisher Relations; Bill Brembeck, OCLC, Senior Product Analyst, Data Services and WorldCat Quality Management; Elizabeth W. Brown, Project MUSE, Manager, Publisher Relations; Alexandra de Lange-van Oosten, Elsevier, Head of Third-Party Platform Relations; Theodore Fons, OCLC, Executive Director, Data Services and WorldCat Quality Management; Catherine Giffi, Wiley, Director, Strategic Market Analysis; Noah Levin, Springer Science+Business Media, Metadata Manager; Alistair Morrison, Elsevier, Senior Product Manager, ScienceDirect; Carlen Ruschoff, University of Maryland, Director, Technical Services and Strategic Initiatives; Gregg A. Silvis, University of Delaware Library, Associate University Librarian, Information Technology and Digital Initiatives; and Jabin White, ITHAKA/JSTOR, Vice President, Content Management.
The E-Data Quality Working Group has scheduled several events to discuss the white paper, including a National Federation of Advanced Information Services (NFAIS) Webinar on October 23, “Improving Discovery and Access: Recommendations from the E-Data Quality Working Group”, and a discussion at the 2014 Charleston Conference on November 6, “Success Strategies for Content Discovery: A Cross-Industry White Paper”.
Download the white paper at: http://www.oclc.org/go/en/econtent-access.html
ProQuest collections now indexed and discoverable through Ex Libris Primo
ProQuest and Ex Libris® Group have announced that over 200 of ProQuest’s most widely used databases have been indexed in the Ex Libris Primo Central Index of scholarly electronic resources, making the content easily discoverable via the Ex Libris Primo discovery service.
Among the important ProQuest databases that are now available via Primo are ProQuest Central; ProQuest Dissertations & Theses Global; ABI/INFORM®; Black Studies Center; various historical digital collections, including Early English Books Online (EEBO); Periodicals Archive Online; Periodicals Index Online (PIO); and approximately 50 abstracting and indexing (A&I) databases, which join the growing list of subject indexes in Primo Central. In the coming months, ProQuest Congressional and other databases will also be accessible via Primo. ProQuest content is used in more than 26,000 institutions in over 150 countries around the world.
The collaboration between ProQuest and Ex Libris began earlier in 2014 with an agreement to index ProQuest full-text and A&I databases in Primo Central. These two leaders in the information and discovery markets also began exploring methods of integrating the ProQuest Summon discovery service more tightly with the Ex Libris Alma library management solution and Ex Libris Aleph® and Voyager® integrated library systems.
Jack Ammerman, assistant university librarian at Boston University Libraries – one of a select group of institutions that were granted prerelease access to the ProQuest databases – observed: “We are delighted that the ProQuest databases have become an integral part of the Libraries' discovery service. Now our users are readily able to discover and access full-text content from familiar ProQuest databases. These databases greatly enhance the value of Boston University Libraries Search for our faculty and students. We are confident that the use of these licensed resources will increase”.
Knut Anton Bøckman of the Royal Library of Denmark noted: “Having ProQuest resources searchable along with the other sources in Primo Central significantly extends the usability of the one-stop search solution for students. This expansion of Primo Central is an important step for libraries that want to design the discovery options for resources that they subscribe to, the better to fulfill the diversity of user needs”.
“ProQuest and Ex Libris are acting on a shared vision to support seamless research for our mutual customers and their users”, said Allan Lu, ProQuest vice-president, Research Tools, Services and Platforms. “This is a great first step in an extensive collaboration that promises to make the key services of each of the companies work better together, improving the user experience and supporting customer choice”.
“We are thrilled with the results of the cooperation between the two companies”, remarked Shlomi Kringel, vice president of discovery and delivery solutions at Ex Libris. “The close collaboration provides the scholarly community with clear benefits and streamlines the discovery and delivery processes for millions of students and researchers at thousands of Primo institutions worldwide”.
Ex Libris Primo Central: http://www.exlibrisgroup.com/category/PrimoCentral
Comments sought on standards for distance learning library services draft revision
The Association of College and Research Libraries (ACRL) Distance Learning Section (DLS) Standards Committee has prepared a draft revision of the 2008 Standards for Distance Learning Library Services and is seeking comments before completing final revisions and submitting the standards for approval. Among the changes in the revision are updated definitions for key concepts such as “computer literacy” and “embedded librarian”, and new material addressing the challenges of Massive Open Online Courses (MOOCs).
The draft of the standard is available on the section website. Comments may be submitted through the website or directly to DLS Standards Committee Chair Harvey Gover (mailto:firstname.lastname@example.org) no later than November 1, 2014.
Standards for Distance Learning Library Services (2008): http://www.ala.org/acrl/standards/guidelinesdistancelearning
Standards for distance learning library services draft revision available at:
Issue 1 of Weave: Journal of Library User Experience is now online
The first issue of Weave: Journal of Library User Experience is now out. Weave is a peer-reviewed open access Web-based publication hosted by Michigan Publishing, a division of the University of Michigan Library. Weave features articles on UX design for librarians and professionals in related fields. The inaugural issue includes articles on A/B testing and UX research for small staffs, as well as reviews and interviews of interest to library UX professionals.
The editors of Weave have issued a call for papers for issue 2, to be published in spring 2015. The editors are looking for two kinds of work:
1. Full-length scholarly articles of relevance to UX in libraries. Weave’s editors are interested in publishing innovative and cutting-edge research, practical applications and their implications, ideas and speculation about future directions for UX, and reviews of books and publications of interest to UX professionals.
2. The Dialog Box, a new kind of review section. Weave’s Dialog Box aims to extend beyond the traditional book review section and feature critical dialog not only with books but also with other media that set the boundaries of library UX. The editors of Weave are open to dialog features taking different forms. Beyond the traditional book review, these might include: symposia of short contributions focused around a single question or “artifact” relevant to library UX; review essays juxtaposing two or three artifacts of library UX; or other formats that bring new insight to existing conversations.
Read Weave issue 1: http://weaveux.org/
Submit an article, or pitch an idea for the dialog box at: http://weaveux.org/submit
Swets Information Services B.V. files for bankruptcy
From a press release posted on the Swets Web site:
Following the suspension of payment for Swets & Zeitlinger Group BV, granted by the court in Amsterdam on September 19, the management team, together with Mr. J.L.M. Groenewegen (CMS), the appointed administrator, have been working tirelessly to investigate alternatives for the business. Unfortunately, concrete alternatives to sell the business as a whole have not materialized, and due to these developments Swets Information Services B.V. filed for bankruptcy on September 23, 2014. The court of Amsterdam honored this request, and as of September 23, 2014, Swets Information Services B.V. is declared bankrupt.
Mr. J.L.M. Groenewegen has now been appointed as trustee and the court of Amsterdam has also granted a cooling-off period (afkoelingsperiode) for a duration of two months. Swets Information Services B.V. has approximately 110 employees in The Netherlands. As a consequence of the bankruptcy, the employment contracts with these employees will be terminated.
The bankruptcy of Swets Information Services B.V. does not (for now) affect its (foreign) subsidiaries, as the bankruptcy relates only to Swets Information Services B.V. If and how the bankruptcy will affect its branches is currently under investigation by the trustee.
Read the full press release at:
Additional information at: