Citation
Hanson, H. and Stewart-Marshall, Z. (2015), "New & Noteworthy", Library Hi Tech News, Vol. 32 No. 1. https://doi.org/10.1108/LHTN-01-2015-0003
Publisher: Emerald Group Publishing Limited
New & Noteworthy
Article Type: New & Noteworthy. From: Library Hi Tech News, Volume 32, Issue 1
NISO starts new standards development projects on altmetrics and new forms of assessing impact
The voting members of the National Information Standards Organization (NISO) have approved four new projects to develop standards for alternative assessment metrics (altmetrics). The NISO Alternative Assessment Metrics Initiative began in July 2013 with funding from the Alfred P. Sloan Foundation and a goal of building trust in, and adoption of, new methods of assessing impact. Phase 1 of the project, completed in the summer of 2014, gathered a large array of relevant stakeholder groups to identify which areas of alternative metrics would benefit most from standards-related development. This input was distilled into a white paper published in June 2014, which was then presented to the NISO community to prioritize the action items as possible NISO work items. Phase 2 of the project will develop standards or recommended practices in the prioritized areas of definitions, calculation methodologies, improvement of data quality and use of persistent identifiers in alternative metrics. As part of each project, relevant use cases and how they apply to different stakeholder groups will be developed.
“Assessment of scholarship is a critical component of the research process, impacting everything from which projects get funded to who gains promotion and tenure, and which publications gain prominence in their fields of inquiry”, explains Martin Fenner, Technical Lead, PLOS Article Level Metrics, and Chair of the NISO Alternative Metrics Initiative Steering Committee. “However, traditional metrics that have been primarily based on print processes are failing to keep pace with both the expanded range of research outputs produced by scholars, and the diverse usage of these research outputs in scholarly communication that is increasingly purely electronic. Altmetrics are increasingly being used and discussed as an expansion of the tools available for measuring the scholarly and social impact of research. For altmetrics to move out of its current pilot or proof-of-concept phase, we need to develop commonly used definitions and guidelines for appropriate collection and reporting of data, so that organizations who wish to utilize these metrics can adequately understand them and ensure their consistent application and meaning across the community.”
“The NISO Alternative Assessment Steering Committee will oversee several working groups that will be formed to develop the identified standards and recommended practices”, states Nettie Lagace, NISO Associate Director for Programs. “For participation on these working groups, we are seeking interested participants from all the affected stakeholders including libraries, scholarly publishers, research funders (governmental and non-governmental), scholars, university departments of academic affairs, providers of alternative metrics data, and system providers who incorporate different elements of alternative metrics in their services.”
“We expect this initiative will continue to be broadly inclusive, with contributions from a diverse set of voices, who will be reliant on these new metrics and resulting tools”, said Todd Carpenter, NISO Executive Director. “In addition to the working group members, we also will seek broader community feedback through stakeholder interest groups. In addition, draft documents will be made available for public comment and/or trial use before finalization and publication. NISO will also schedule public webinars for further discussion and training during the development process.”
Anyone interested in participating on one of the initiative’s working groups should use the online contact form (http://www.niso.org/contact/) and indicate which of the four activity areas they are interested in.
The approved proposal for the Phase 2 projects, as well as the Phase 1 White Paper, is available on the NISO website at: http://www.niso.org/topics/tl/altmetrics_initiative/
California Library Consortium using Intota Assessment for collection usage analysis
Libraries worldwide are transforming their spaces to better align with the changing needs of their communities. The Statewide California Electronic Library Consortium (SCELC) has initiated a collection analysis project to gain insight into collection overlap and usage as a basis for establishing a sustainable shared collections program among its members. SCELC chose ProQuest Intota™ Assessment for comprehensive analysis of more than 4.5 million monographs across nine member libraries participating in the pilot project.
Outcomes from this multi-phase effort will shape the development of a formal program that will allow libraries to plan for responsible reduction in the number of print monographs in their collections, which ultimately will create space in libraries for collaborative learning and other purposes. The analysis will identify unique and prevalent copies so that libraries may determine which resources will be preserved and shared among libraries.
“From our competitive evaluation we determined that Intota Assessment was the best fit for our requirements. This program will maximize shelf space and the collections budget for member libraries, while maintaining student access to materials”, said Rick Burke, Executive Director, SCELC. “Intota Assessment enables us to consolidate and share our collections based on real evidence to inform our decisions. The analytics and reporting capabilities give us a much broader, deeper view of usage and enable us to be more efficient in the process.”
Overlap analysis and circulation velocity are examples of the metrics the libraries are using to evaluate their print collections, both within the peer group and against external reference points such as OCLC WorldCat and Resources for College Libraries™. Intota Assessment reports provide views into library data that previously were very difficult, if not impossible, to achieve.
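To make the idea of an overlap analysis concrete, the following minimal Java sketch (not Intota Assessment code; the holdings and identifiers are hypothetical) counts how many title identifiers two libraries share and how many are unique to one of them:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

/**
 * Minimal illustration of a collection overlap analysis: given two sets of
 * title identifiers (e.g. OCLC numbers), report how many titles are shared
 * and how many are unique to one library. The identifiers below are
 * hypothetical; this is not Intota Assessment code.
 */
public class OverlapAnalysis {

    public static void main(String[] args) {
        Set<String> libraryA = new HashSet<>(Arrays.asList("ocn001", "ocn002", "ocn003", "ocn004"));
        Set<String> libraryB = new HashSet<>(Arrays.asList("ocn003", "ocn004", "ocn005"));

        // Titles held by both libraries
        Set<String> shared = new HashSet<>(libraryA);
        shared.retainAll(libraryB);

        // Titles held only by library A
        Set<String> uniqueToA = new HashSet<>(libraryA);
        uniqueToA.removeAll(libraryB);

        System.out.printf("Shared titles: %d%n", shared.size());
        System.out.printf("Unique to library A: %d%n", uniqueToA.size());
        System.out.printf("Overlap rate for library A: %.0f%%%n",
                100.0 * shared.size() / libraryA.size());
    }
}
```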
Established in 1986 to develop resource-sharing relationships among private academic institutions across California, SCELC has grown to more than 110 member libraries. Member libraries in the pilot are Claremont University Consortium, Holy Names University, Loyola Marymount University, Mount St. Mary’s College, Pepperdine University, St. Mary’s College of California, University of Redlands, University of San Diego and the University of San Francisco.
Intota Assessment: http://www.proquest.com/products-services/intota-assessment.html
Project on assessing use and impact of digital repositories receives IMLS grant
A grant from the Institute of Museum and Library Services (IMLS) will enable Montana State University (MSU) Library faculty and administrators to perform research for a project known as “Measuring Up: Assessing Use of Digital Repositories and the Resulting Impact”. With the funds, MSU – in partnership with OCLC Research, the Association of Research Libraries and the University of New Mexico – will investigate the difficulties libraries face in documenting and evaluating the use of their digital repositories through Web analytics.
Libraries routinely collect statistics on the use of digital collections for assessment and evaluation purposes, according to MSU Library Dean Kenning Arlitsch. Libraries then report those statistics to a variety of stakeholders, including the libraries’ own institutions, professional organizations and funding agencies. However, differing counting methods and other difficulties can result in inaccurate statistics, Arlitsch said. “Inaccurate statistics lead to a variance in numbers across the profession that makes it difficult to draw conclusions”, Arlitsch said. “The inaccuracy runs in both directions, with under-reporting numbers as much of a problem as over-reporting.”
To address the issue, Arlitsch and his colleagues from the partner institutions involved with the grant – a National Leadership Grant for Libraries – will examine the difficulties libraries face in producing accurate reports, as well as recommend best practices that will help improve the accuracy and consistency of the reports. The team will also examine how to assess the impact of institutions’ digital repositories on the citation rates of academic papers. Citation rates are important, Arlitsch said, because the number of citations often has a direct effect on university rankings and other performance indicators.
“In theory, if you can raise citation rates, you should be able to raise university rankings”, Arlitsch said.
OCLC Research staff will collaborate with project partners to develop and evaluate new models of institutional repositories that are more visible to Internet search engines and more consistent across collections within and between libraries. Senior Research Scientist Jean Godby and Senior Program Officers Ricky Erway and Roy Tennant will serve on an advisory panel for the project. In addition, Research Support Specialist Jeff Mixter will spend 50 per cent of his time for the next three years designing, developing and testing the models for the project.
More about the project at OCLC Research: http://oclc.org/research/news/2014/10-21.html
Open Preservation Foundation launches new strategy, brand and website
In October 2014, the Open Planets Foundation announced a change in the company name to the Open Preservation Foundation (OPF). The name change reflects the foundation’s core purpose and vision in the field of digital preservation while retaining its widely known acronym, OPF.
“The history behind the old name refers to the Planets Project, an EU-funded digital preservation project which closed in 2010”, explained Dr Ross King, Chair of the OPF. “The Open Planets Foundation was established to sustain the results from the project.”
The OPF has launched its new strategy for 2015-2018. It comprises three core strands: Technology; Knowledge; and Advocacy and Alliances.
“This is a new chapter for the Foundation”, explains Dr King. “The strategic plan has been adopted by the Board of Directors after consultation with the membership and we believe it underpins the OPF’s vision to provide shared solutions for effective and efficient digital preservation.”
OPF has also unveiled a new website designed to provide a better user experience with improved navigation and functionality, following design and testing involving OPF members and the digital preservation community.
“The new website supports our strategy which sets out our priorities and describes the benefits we provide to our members, and the open digital preservation community”, adds Ed Fay, Executive Director of OPF. “We recently changed our name to the Open Preservation Foundation, and we are excited to reveal our new brand as we launch the website.”
The Knowledge section includes a blog which attracts international contributions on digital preservation case studies and tool development, as well as providing access to the latest news and a community events calendar. OPF has introduced new content for online training and interest groups, which will be further developed over time.
The Technology section sets out the OPF’s open-source principles and practices, and showcases its digital preservation products and services. OPF has also added a selection of corpora for testing tools to evaluate their performance.
Website users can learn more about OPF projects and meet the members, board and staff in the Our Organisation section, and download the strategic plan 2015-2018 and annual reports.
Users with existing accounts will be asked to reset their password when first logging in to the new website; once this is complete, they will be able to add new content.
Open Preservation Foundation: http://openpreservation.org/
Download the strategic plan 2015-2018: http://openpreservation.org/documents/public/OPF_VisionandStrategy_2015-18.pdf
The Data Harvest: report & recommendations from the Research Data Alliance
In 2013, the EU, US and Australian governments launched one of the most significant ventures in the quest for global data-sharing: the Research Data Alliance (RDA). Its aim: to promote the international cooperation and infrastructure that scientific data sharing will require. It now has over 2,350 members from 96 countries. RDA’s vision is of researchers and innovators openly sharing data across technologies, disciplines and countries to address the grand challenges of society. RDA’s mission is to build the social and technical bridges that enable data sharing, accomplished through the creation, adoption and use of the social, organisational and technical infrastructure needed to reduce barriers to data sharing and exchange. Scientists and researchers join forces with technical experts in focused Working Groups and exploratory Interest Groups. Membership is free and open to all at http://www.rd-alliance.org.
In October 2010, the European Union High Level Group on Scientific Data presented the report Riding the Wave to the European Commission, outlining a series of policy recommendations on how Europe could gain from the rising tide of scientific data. Over four years later, a team of European experts has produced a new report, “The Data Harvest: How sharing research data can yield knowledge, jobs and growth”, which updates the landscape described in the earlier report and sounds a warning that Europe must act now to secure its standing in future data markets. The report, released on 3 December 2014, outlines the benefits and challenges and offers recommendations to European policymakers.
In the new report, the European members of the Research Data Alliance propose the following actions:
DO require a data plan, and show it is being implemented. We want a system to let researchers around the globe gather, store and manage, share, re-use, re-interpret and act upon each other’s data.
DO promote data literacy across society, from researcher to citizen. Embracing these new possibilities requires training and cultural education – inside and outside universities. Data science must be promoted as an important field in its own right.
DO develop incentives and grants for data sharing (and don’t forget Horizon 2020). Few people will act without incentives – whether direct grants from EU programmes, or indirect market incentives to private investors. For Horizon 2020, the upcoming Work Programme for 2016-2017 should reflect the growing importance of data sharing – in funding for experiments, business models, communities and analysis.
DO develop tools and policies to build trust and data-sharing. Perhaps the biggest challenge in sharing data is trust: How do you create a system robust enough for scientists to trust that, if they share, their data won’t be lost, garbled, stolen or misused? In the end, it is the culture of science that we are talking about, and that will take a generation to change.
DO support international collaboration. The biggest benefits will come from cross-fertilisation with other disciplines, regions, cultures and economic systems. The Research Data Alliance, with its 96-country membership, exemplifies the kind of global coordination that will be needed.
DON’T regulate what we don’t yet understand. Sharing scientific data on this scale is new; we don’t know yet what opportunities will arise, or what problems will dog us. Issues such as privacy and ethics should be handled in consultation with the wider data and scientific community.
DON’T stop what has begun well. Much effort, expense and brainpower, across the EU, has been invested in making data sharing a reality.
Download “The Data Harvest”: http://europe.rd-alliance.org/sites/default/files/report/TheDataHarvestReport_%20Final.pdf
Government-University-Industry Research Roundtable on big data
The Government-University-Industry Research Roundtable (GUIRR) of the USA National Academies held a two-day discussion, October 14-15, 2014, on the implications of big data for research. The meeting, entitled “The Big Data Revolution: What Does It Mean for Research?”, focused on how big data is changing the way we do science, looking particularly at the challenges of working with and analyzing big data sets, ethics and privacy concerns, workforce and training issues, and opportunities for public–private partnerships. While acknowledging the technical side of big data, the discussions focused primarily on the role and impact of big data in scholarly research.
Information on the program, including slides from several of the presentations, can be found at: http://sites.nationalacademies.org/PGA/guirr/PGA_152479
EIFL (Electronic Information for Libraries) launches new website
EIFL (Electronic Information for Libraries) is a not-for-profit organization that works with libraries to enable access to knowledge in developing and transition economy countries in Africa, Asia Pacific, Europe and Latin America. On December 15, 2014, EIFL launched its brand-new website. With a modern design and enhanced functionality, the new website provides users with an intuitive and engaging browsing experience.
The new site is geared towards EIFL’s global audience of librarians, researchers, publishers, funders, policymakers and students, with easy-to-navigate and mobile-optimized pages. “The concept and design of the new website appeals to our international audience and better reflects the important work being done by EIFL’s team around the world”, says Simona Siad, EIFL’s Communications Manager. “The website is often our first point of contact with our audience, and as such our new site will play a key role in future communications and outreach strategies.”
The new http://eifl.net was designed by Gregoire Vella, a user interface designer, and developed by a team from Gai Technologies, a boutique development company. The new site continues to use Drupal as its content management system.
The website is fully responsive, so it can easily be accessed from a smartphone or tablet. The fresh, contemporary and unified design across all platforms follows EIFL’s enhanced visual identity, and there is also a text-only version of the site for users in low-bandwidth areas and countries.
New features of the site include:
a new resources gallery with the latest EIFL reports, articles, case studies, webinars and guides, which lets users filter by content and region so they can quickly find the resources they need;
an interactive map with updated country pages showcasing EIFL partner library consortia; and
updated programme landing pages (Licensing, Copyright and Libraries, Open Access and Public Library Innovation), which showcase videos, information and photos at the programmatic level.
EIFL’s new website was built to be accessible to people with disabilities and follows the Web Content Accessibility Guidelines (WCAG) 2.0. User testing was also conducted to ensure the usability of the site. All site content, except where otherwise noted, is licensed under a Creative Commons Attribution (CC BY) licence.
Explore the new EIFL site at: http://www.eifl.net/
K|N Consultants to launch Open Access Network; seeking funding
K|N Consultants is a not-for-profit 501(c)(3) organization that provides strategic and operational guidance via a range of consultation services to academic institutions and organizations; academic, national and public libraries; learned and scholarly societies; scholarly publishers and university presses; government agencies; private foundations; and other mission-driven organizations struggling to adjust to the rapid change in higher education.
K|N’s consultation services are focused on improvements to and efficiencies in all stages of the research and teaching life cycle within higher education – facilitating strategic planning and identifying campus-wide solutions to researchers’ and educators’ needs, promoting collective infrastructure development and encouraging and enabling partnerships within and across organizations.
K|N is seeking funding support to help launch the Open Access Network (OAN). The classic definition of open access (OA) remains that of the Budapest Open Access Initiative, drafted at a meeting held in Budapest in December 2001: “free and unrestricted online availability [of the scholarly literature] […] permitting any user to read, download, copy, distribute, print, search, or link to the full texts of these articles, crawl them for indexing, pass them as data to software, or use them for any other lawful purpose, without financial, legal, or technical barriers other than those inseparable from gaining access to the Internet itself”.
Since that definition was offered 13 years ago, OA has taken hold in the scientific community, but efforts still focus on articles rather than the entire scholarly output of all those participating in the research endeavor, no matter the format of communication, the discipline, or the size or type of institution. Still more needs to be done to realize the full promise and potential of OA.
The OAN, launching in early 2015, looks to tackle head-on the challenge of OA business models for disciplines in the humanities and social sciences. The OAN provides a broad and transformative solution for sustainable OA publishing and archiving that is complementary to, rather than competitive with, other OA funding approaches. The OAN model, as spelled out in K|N’s white paper, A Scalable and Sustainable Approach to Open Access Publishing and Archiving for Humanities and Social Sciences, proposes that all institutions of higher education contribute to systemic support of the research process itself, including its scholarly output. It is a bold rethinking of the economics of OA by way of partnerships among scholarly societies, academic libraries and publishers, funded by an institutional fee structure based on a student-and-faculty per-capita sliding scale. Core to the model is its insistence on broad institutional support of the scholarly communication infrastructure itself, not of any particular format (e.g. books, journals, websites).
K|N Consultants will launch the OAN in early 2015 and is now seeking funding to support a series of pilot projects designed to test the model.
For more information and/or to donate: http://knconsultants.org/help-us-launch-the-open-access-network/
White paper: http://knconsultants.org/wp-content/uploads/2014/01/OA_Proposal_White_Paper_Final.pdf
NISO establishes Open Discovery Initiative Standing Committee
The National Information Standards Organization (NISO) has announced the next phase for the Open Discovery Initiative (ODI), a project that explores community interactions in the realm of indexed discovery services. Following the working group’s recommendation to create an ongoing standing committee as outlined in the published recommended practice, Open Discovery Initiative: Promoting Transparency in Discovery (NISO RP-19-2014), NISO has formed a new standing committee reflecting a balance of stakeholders, with member representation from content providers, discovery providers and libraries. The ODI Standing Committee will promote education about adoption of the ODI Recommended Practice, provide support for content providers and discovery providers during adoption, conduct a forum for ongoing discussion related to all aspects of discovery platforms for all stakeholders and determine timing for additional actions that were outlined in the recommended practice.
“Discovery systems are critical to the research ecosystem”, states Laura Morse, ODI Standing Committee Co-chair and Director, Library Systems, Harvard University. “Working with content and discovery providers to ensure that all content, whether it is licensed or openly available, can be discovered by library users regardless of the institution’s choice of discovery system is core to supporting research, teaching, and learning. The ODI Standing Committee will build on the work of the original ODI Working Group to promote content neutrality and the widespread adoption of all tenets of the recommended practice by discovery service providers, content providers, and libraries.”
“The ODI Recommended Practice provides a rich framework within which content providers and discovery service suppliers can drive collaborative improvements toward a smooth and comprehensive library search experience”, states Lettie Conrad, ODI Standing Committee Co-chair and Executive Manager, Online Products, SAGE. “We must work together across the industry to fully realize the vision of indexed discovery services, which is made possible by NISO’s leadership and guidance through the standards formation process. The Standing Committee invites suggestions from the community on how we can best promote and enable adoption of the NISO ODI Recommended Practice.”
“Uptake of NISO’s recommendations is always aided when community members are willing to continue working together as a Standing Committee”, explains Nettie Lagace, Associate Director for Programs at NISO. “As stakeholders utilize the NISO documents and discuss potential areas of further work, the benefits of relying on a group of their peers to educate them and provide support cannot be underestimated. NISO is grateful to the members of the Standing Committee for contributing their time to these ongoing efforts.”
More information about the ODI Standing Committee and the Open Discovery Initiative: Promoting Transparency in Discovery (NISO RP-19-2014) recommended practice is available from the Open Discovery Initiative webpage on the NISO website at: http://www.niso.org/workrooms/odi/
PERICLES Extraction Tool released: open source software for metadata extraction
PERICLES is a four-year project that aims to address the challenges of ensuring that digital content remains accessible and understandable over time. A recent outcome of the PERICLES project is the release of the PERICLES Extraction Tool.
The PERICLES Extraction Tool (PET) is open-source (Apache 2 licensed) Java software for the extraction of significant information from the environment in which digital objects are created and modified. This information supports object use and reuse, e.g. for better long-term preservation of data. The tool was developed entirely for the PERICLES EU project by Fabio Corubolo, University of Liverpool, and Anna Eggers, Göttingen State and University Library.
PET is prototype research software: it has usability limitations and has not undergone extensive testing, so it should not be expected to be of production-release quality. It requires some careful configuration, but it is working software and can produce useful and novel results.
In a nutshell, PET works by analysing the use of data from within the creator or consumer environment, extracting information useful for later reuse of the data that cannot be derived in later phases of the data life cycle (for example, at ingest time). It is based on sheer curation principles but has no remote functionality, so the environment’s user has full control over which information to extract and keep. The tool analyses both files and their changes, as well as the system environment.
The tool is generic and can be adapted to various scenarios, as it provides a plug-in structure for integrating use-specific extraction algorithms.
Various information extraction techniques are implemented as plug-in extraction modules, either as complete implementations or, where possible, by re-using existing external tools and libraries. Environment monitoring is supported by specialized monitoring daemons, with continuous extraction of relevant information triggered by environment events related to the creation and alteration of digital objects, such as changes to an observed file or directory, the opening or closing of a specific file, and other system calls. A snapshot extraction mode captures the current state of the environment and is mainly designed to extract information that does not change frequently, such as system resource specifications.
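As a rough illustration of this kind of event-driven monitoring, the following minimal Java sketch uses the standard WatchService API (this is not PET’s actual daemon code) to watch a directory and report when an observed file is created or modified, which is the point at which an extraction module could be triggered:

```java
import java.nio.file.FileSystems;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardWatchEventKinds;
import java.nio.file.WatchEvent;
import java.nio.file.WatchKey;
import java.nio.file.WatchService;

/**
 * Minimal sketch of event-triggered environment monitoring in the spirit of
 * PET's monitoring daemons (this is not PET code). It watches a directory and
 * prints a message whenever a file is created or modified, which is the point
 * at which an extraction module could be invoked.
 */
public class DirectoryMonitor {

    public static void main(String[] args) throws Exception {
        Path watched = Paths.get(args.length > 0 ? args[0] : ".");
        WatchService watcher = FileSystems.getDefault().newWatchService();
        watched.register(watcher,
                StandardWatchEventKinds.ENTRY_CREATE,
                StandardWatchEventKinds.ENTRY_MODIFY);

        System.out.println("Monitoring " + watched.toAbsolutePath());
        while (true) {
            WatchKey key = watcher.take();           // block until events arrive
            for (WatchEvent<?> event : key.pollEvents()) {
                if (event.kind() == StandardWatchEventKinds.OVERFLOW) {
                    continue;                        // some events were lost; skip
                }
                Path changed = watched.resolve((Path) event.context());
                // A real tool would trigger extraction of environment
                // information for the changed object here.
                System.out.println(event.kind().name() + ": " + changed);
            }
            if (!key.reset()) {
                break;                               // directory no longer accessible
            }
        }
    }
}
```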
An advantage of PET is that established information extraction tools can be integrated as modules. A user who previously had to run many different tools to extract the information and metadata for a scenario can instead use PET as a framework and obtain all required information in one standard format (JSON or XML), saved via a selectable storage interface. Furthermore, this approach makes it possible to enrich the established set of information with additional information extracted by other PET modules.
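PET’s actual module API is documented on GitHub; purely to illustrate the plug-in idea, a framework of this kind might define an extraction-module interface along the following lines (the interface and class names here are hypothetical, not PET’s API):

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.LinkedHashMap;
import java.util.Map;

/**
 * Hypothetical sketch of a plug-in extraction module: a framework can wrap
 * different extraction techniques behind one interface and collect the
 * results in a single structure, which could then be serialised to JSON or
 * XML. The names are illustrative, not PET's actual API.
 */
interface ExtractionModule {
    /** Returns a map of property names to extracted values for the given file. */
    Map<String, String> extract(Path file) throws Exception;
}

/** Example module: basic file-system properties of an observed file. */
class FileAttributesModule implements ExtractionModule {
    @Override
    public Map<String, String> extract(Path file) throws Exception {
        Map<String, String> info = new LinkedHashMap<>();
        info.put("size", Long.toString(Files.size(file)));
        info.put("lastModified", Files.getLastModifiedTime(file).toString());
        return info;
    }
}

public class ExtractionDemo {
    public static void main(String[] args) throws Exception {
        Path target = Paths.get(args.length > 0 ? args[0] : "example.txt");
        ExtractionModule module = new FileAttributesModule();
        // A framework would iterate over all registered modules and merge results.
        System.out.println(module.extract(target));
    }
}
```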
PET is open source and now available on GitHub, with documentation and tutorials. The theoretical background is explored in detail in the associated paper published at iPRES 2014, “A pragmatic approach to significant environment information collection to support object reuse”. A blog post briefly summarising PET is also available, where readers can leave comments.
PERICLES Extraction Tool (PET) on GitHub: https://github.com/pericles-project/pet
Background paper, “A pragmatic approach to significant environment information collection to support object reuse”: http://pericles-project.eu/uploads/files/ipres2014_PET.pdf
PERICLES blog post presenting PET: http://goo.gl/wmhuhA
PERICLES EU Project: http://www.pericles-project.eu/
Registering researchers in authority files: report from OCLC Research
Written by OCLC Research Program Officer Karen Smith-Yoshimura and the 13 members of the Registering Researchers in Authority Files Task Group, which comprises specialists from the USA, UK and The Netherlands, this report summarizes their research into approaches to providing authoritative researcher identifiers.
Registering researchers in some type of authority file or identifier system has become more compelling, as both institutions and researchers recognize the need to compile their scholarly output. The report presents functional requirements and recommendations for six stakeholders: researchers, funders, university administrators, librarians, identity management systems and aggregators (including publishers). It also provides an overview of the researcher identifier landscape, changes in the field, emerging trends and opportunities.
Key highlights:
While funders and publishers have been adopting researcher identifiers, it is equally important for research institutions and libraries to recognize that “authors are not strings” and that persistent identifiers are needed to link authors to their scholarly output.
Although there are overlaps among identifier systems, no one system will ever include all researchers or meet all functional requirements, so the ability to communicate among systems becomes crucial.
New modes of scholarly communication increase the need to rely on persistent researcher identifiers to attribute output to the correct researcher and the researcher’s institution.
Funders are finding that persistent identifiers are important for efficient and scalable tracking of the impact of the research they support.
Although interoperability between systems is increasing, the formats and data elements used by different identifier systems are often not interoperable.
There is a huge opportunity for third-party reconciliation or resolution services to provide linking among different identifier systems.
Supplementary data sets document the task group’s research and are also available for downloading: 18 use-case scenarios for the six stakeholders; functional requirements derived from the use-case scenarios; the list of 100 research networking and identifier systems the task group considered; characteristics profiles of 20 research networking and identifier systems; mappings of each of the 20 systems to the functional requirements; and a researcher identifier information flow diagram.
This report and its supplementary data sets will be of interest to everyone who has a stake in identifying the research output of individual authors and institutions.
Download the report and supplementary documents from: http://www.oclc.org/research/publications/library/2014/oclcresearch-registering-researchers-2014-overview.html
NISO releases draft recommended practice on exchanging serial content
The National Information Standards Organization (NISO) has released a draft recommended practice, Protocol for Exchanging Serial Content (PESC), NISO RP-23-201x. This Recommended Practice was developed to provide guidance on the best way to manage all of the elements of digital serial content packaging in a manner that aids both the content provider and the content recipient in understanding what has been delivered and received.
Serial publications represent a diverse content space ranging from popular magazines to scholarly journals, from content that is image-based to content that is text-based, from publications that have new content daily (even hourly) to those that might have new content only every few years.
“As part of their missions, many different organizations – libraries, archives, indexing services, content aggregators, publishers, and content creators – need to exchange and work with digital files that make up serial content”, states Leslie Johnston, Director of Digital Preservation, National Archives and Records Administration, and Co-chair of the NISO PESC Working Group. “When digital serial content is exchanged, the files that comprise a serial ‘publication’ are packaged together in some manner and these packages can be highly variable. Currently, there is no standardized packaging format that addresses the level of specificity and granularity needed and the PESC Recommended Practice was developed to fill this gap.”
“The recommendations in this document describe preferred practices for the packaging and exchange of serial content to enable the automation of processes to receive and manage serial content at scale”, explains Kimberly A. Tryka, Research Data Librarian, National Institute of Standards & Technology (NIST), and Co-chair of the NISO PESC Working Group. “By following these practices, organizations can make it clear what content has been transmitted, how it is organized, and what processing is required when a new package is received.”
Current status: The draft PESC Recommended Practice was available for public comment through December 5, 2014. The Working Group is currently reviewing the comments and determining what next steps they may require for the Recommended Practice.
Packaging is of concern to any organization that needs to exchange files. Currently, there is no standardized packaging format that addresses the level of specificity and granularity needed. The PESC Working Group plans to create a recommendation for a set of guidelines – a protocol – that would define the rules to be used to create a package of serial content. Such a protocol is required not only for the interchange of content but also for the automation of processes to receive and manage serial content at scale. Use of the protocol would make it clear what content has been transmitted and how it is organized, and would inform what processing is required when a new package is received.
The Working Group will need to pursue two related activities concurrently, as decisions made and information learned in each activity will inform the other:
1. Determine the specifications of a packaging recommendation that can be used for archiving and exchanging digital files related to periodic publications. The Working Group will need to define the format of, and information to be included in, a manifest that describes a group of digital files related to a serial and that can be used to ensure the integrity of the file group (a purely illustrative sketch of such an integrity check appears below). Issues of scope will also need to be addressed in initial discussions; for example, should the recommended practice comment on what is an appropriate level of exchange (single article? single volume? single issue? multiples?) and/or prescribe any type of directory or folder structure or identifiers?
2. Examine current practice and see what strategies are currently used in the community.
Depending on the results of this exploration, the Working Group might find that it is possible either to adopt a current method wholesale or to modify a current method to better address the specific needs of serial content, rather than creating a new method from scratch.
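The draft does not yet prescribe a manifest format, so the following Java sketch is purely illustrative: it assumes a hypothetical manifest of filename and SHA-256 checksum pairs and verifies the integrity of the files in a received package, the kind of automated check such a manifest is meant to enable.

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.security.MessageDigest;
import java.util.List;

/**
 * Purely illustrative manifest-based integrity check for a received package
 * of serial content. The manifest format assumed here (one
 * "filename<TAB>sha256" line per file) is hypothetical and is not taken from
 * the PESC draft, which does not yet prescribe a format.
 */
public class PackageVerifier {

    public static void main(String[] args) throws Exception {
        Path packageDir = Paths.get(args.length > 0 ? args[0] : "package");
        List<String> manifest = Files.readAllLines(packageDir.resolve("manifest.txt"));

        for (String line : manifest) {
            String[] parts = line.split("\t");
            Path file = packageDir.resolve(parts[0]);
            String expected = parts[1];

            // Compute the SHA-256 checksum of the received file.
            MessageDigest sha256 = MessageDigest.getInstance("SHA-256");
            byte[] digest = sha256.digest(Files.readAllBytes(file));
            StringBuilder actual = new StringBuilder();
            for (byte b : digest) {
                actual.append(String.format("%02x", b));
            }

            System.out.printf("%s: %s%n", parts[0],
                    actual.toString().equalsIgnoreCase(expected) ? "OK" : "CHECKSUM MISMATCH");
        }
    }
}
```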
Protocol for Exchanging Serial Content (PESC): http://www.niso.org/workrooms/pesc/
Taylor & Francis compiles white paper on use of social media by the library
Social media has the potential to facilitate much closer relationships between libraries and their patrons, wherever they are based, and however they choose to access library services and resources.
A new white paper, Use of social media by the library: current practices and future opportunities, reports that 88 per cent of librarians think that social media will become more important in the future, and poses such questions as: Will librarians need to learn to become PR experts, or will social media be a passing phase? And how are librarians navigating the ever-changing digital climate while continuing to provide excellent service in the more traditional library setting?
Taylor & Francis seeks to address these questions, and more, with the release of its white paper on social media in the library, which examines current practices from a worldwide perspective. The white paper includes current usage statistics and numerous case studies against which libraries can benchmark their own social media activity and be inspired to try new approaches.
The white paper is informed by research carried out internationally, comprising an online survey, focus groups, tele-interviews and a Twitter party, involving over 600 librarians worldwide.
Some of the most interesting discoveries include:
Facebook is the most popular social media channel, with 58 per cent of librarians using it regularly.
64 per cent of librarians find it challenging to strike a balance between a formal and an informal, engaging tone in their online posts.
75 per cent of librarians post on an ad hoc basis, rather than scheduling in advance.
73 per cent believe more roles dedicated to social media will appear in the library in the future.
The research findings also make clear that social media offers many opportunities for librarians, but that with those opportunities come many challenges.
For a closer look at the white paper, infographic highlights and the full supporting research (including top-level data, a copy of the survey and further analysis), visit: http://www.tandf.co.uk/libsite/whitePapers/socialMedia/
Axiell partners with SOLUS to provide innovative mobile library apps
Axiell, developer of advanced and innovative technical solutions for libraries, archives and museums, has formed a unique partnership with SOLUS, which will bring enhanced mobile benefits to Axiell library customers and will provide library patrons with a state-of-the-art, platform-independent mobile library app.
With the app, released in December 2014, library patrons at Axiell customer sites will be able to:
view their library account on mobile devices such as Android, Windows and Apple phones or tablets;
benefit from a simple one-click renewal of items including real-time synchronisation with the library management system;
perform advanced searches on the library catalogue for both physical and digital media;
see a full breakdown of item availability and location across consortia and branches; and
benefit from the built-in ability to download digital media such as ebooks and audio books to their mobile device through Axiell eHUB’s integration with digital content providers.
Axiell serves libraries, schools, archives, museums and authorities with technically advanced and innovative solutions developed in close cooperation with its customers. More than 1,000 library organisations with thousands of branches use an Axiell library management system and Axiell Arena, a tool for the virtual library.
Grant Palmer, Managing Director at Axiell UK Ltd commented, “We at Axiell are delighted to be working with the team at SOLUS, their approach to innovation, ingenuity and flair complements that already in place at Axiell and this partnership will have significant benefits for our customers. We already have a ‘go create’ attitude together and are excited by what we will offer to the libraries sector across a broad spectrum of agile, lean and very effective solutions which deliver a unique and market-leading approach to digital libraries”.
Neil Wishart, Director at SOLUS, commented, “We are delighted to have formally agreed this partnership with Axiell that will see SOLUS deliver mobile apps for Axiell UK’s library customers. Patron engagement for libraries is at the heart of everything we do at SOLUS and we believe that the best way to do this is through innovative product and software development that delivers a ‘digital experience’ for library users”.
The SOLUS mobile app takes advantage of Axiell Arena functionality through Axiell’s Integrations Framework, a set of application programming interfaces (APIs) that expose core Axiell functionality. Through this arrangement, Axiell customers have access to a set of state-of-the-art mobile library apps that run on the leading mobile platforms (Android, iOS and Windows).
To find out more about Axiell, visit: http://www.axiell.co.uk/ or http://www.axiell.com/
Coalition for Networked Information 2014-2015 Program Plan, videos available
At the Coalition for Networked Information (CNI) Fall Membership Meeting, December 8-9, 2014, in Washington, DC, CNI released its new 2014-2015 Program Plan, surveying areas that the organization intends to focus on during this program year. The program plan is now online.
CNI Director Clifford Lynch discussed the Program Plan as part of his “2014 in Review and 2015 in Prospect” closing plenary session at the meeting, focusing particularly on privacy and security, software preservation and migration, and sustainability. Video of the talk, An Evolving Environment: Privacy, Security, Migration and Stewardship, is now available on CNI’s two video channels.
Presentation materials from the December 2014 meeting are now available from the respective project briefing pages on the meeting website (http://www.cni.org/mm/fall-2014/). Additional videos will be available soon and announced as they are ready.
CNI 2014-2015 Program Plan: http://www.cni.org/program/2014-2015/
Closing Plenary on YouTube: http://youtu.be/jDCJ9mNQGcE
Closing Plenary on Vimeo: http://vimeo.com/114426257
The Code4Lib Journal, Issue 26, is now available
Issue 26 of the Code4Lib Journal offers a slate of articles that have valuable things to say to the library technology community. The issue includes two articles related to the challenges of web-scale archiving. Archiving the Web: A Case Study from the University of Victoria provides an overview of the tools for, and challenges of, archiving websites in the context of the University of Victoria’s experience. Technical Challenges in Developing Software to Collect Twitter Data explains how George Washington University handled the technical challenges of turning its application for collecting social media data from Twitter from a research prototype into something that is sustainable and useful for a wider audience.
Two articles in this issue are focused on using AngularJS. Exposing Library Services with AngularJS provides an introduction to the JavaScript framework AngularJS and introduces some AngularJS modules for accessing library services. Hacking Summon 2.0 the Elegant Way shows how to reverse-engineer the AngularJS-based Summon 2.0 interface and explains how to use this knowledge to customize the interface.
Parsing and Matching Dates in VIAF describes the challenges of interpreting the numerous ways of expressing dates found in authority records aggregated from various sources in the Virtual International Authority File (VIAF) so they can be standardized and compared. Mdmap: A Tool for Metadata Collection and Matching looks at a novel way of generating metadata to describe digitized books by evaluating and merging existing metadata gathered from many sources.
Using Zapier with Trello for Electronic Resources Troubleshooting Workflow shows how these free tools have been used to create a more efficient and effective process for tracking and responding to e-resource access problems. Finally, Developing Applications in the Era of Cloud-based SaaS Library Systems uses Ex Libris’ Developer Network as an example of how vendors are addressing issues such as security, multi-tenancy, latency and analytics in the cloud environment.
Code4Lib Journal, Issue 26: http://journal.code4lib.org/issues/issues/issue26