New & Noteworthy

Library Hi Tech News

ISSN: 0741-9058

Article publication date: 23 November 2012

Citation

(2012), "New & Noteworthy", Library Hi Tech News, Vol. 29 No. 10. https://doi.org/10.1108/lhtn.2012.23929jaa.001

Publisher: Emerald Group Publishing Limited

Copyright © 2012, Emerald Group Publishing Limited

Article Type: New & Noteworthy From: Library Hi Tech News, Volume 29, Issue 10

Minimum digitization capture recommendations for review and comment

In 2011, the Association for Library Collections and Technical Services Preservation and Reformatting Section charged a task force with developing guidelines for libraries digitizing content, with the objective of producing digital products that will endure. The document is intended to build on past work. The authors reviewed previous research, practices at over 50 organizations, and samples of digitized works to arrive at recommended minimum specifications for sustainable digitized content. The recommendations are not intended to dictate specific technical specifications at any given institution, but rather to establish a floor below which capture should not fall. This draft is the result of the task force's work and is now open for general comment before it is published in its final version.

Anyone with an interest is invited to review the document and comment. The comment period will end on December 31, 2012. Comments will then be reviewed, incorporated into the document, and a final version will be published shortly thereafter.

Minimum digitization capture recommendations: http://connect.ala.org/node/185648

Designing storage architectures for digital collections

The Designing Storage Architectures (DSA) meeting held September 20-21, 2012 in Washington, DC, brought together technical and industry experts; Library of Congress IT and subject matter experts; government specialists with an interest in preservation; decision-makers from a wide range of organizations with digital preservation requirements; and recognized authorities and practitioners of digital preservation.

Sessions included a technology overview, a presentation on the state of the industry, community presentations, and three panel discussions: “How to store data over time: DSA for digital preservation”; “What is the future of magnetic tape?”; and “The future of hierarchical storage management”.

Materials from the meeting are available at: www.digitalpreservation.gov/meetings/storage12.html

PREMIS implementation fair presentations and notes available

The PREMIS Data Dictionary for Preservation Metadata is the international standard for metadata to support the preservation of digital objects and ensure their long-term usability. Developed by an international team of experts, PREMIS is implemented in digital preservation projects around the world, and support for PREMIS is incorporated into a number of commercial and open-source digital preservation tools and systems. The PREMIS Editorial Committee coordinates revisions and implementation of the standard, which consists of the Data Dictionary, an XML schema, and supporting documentation.

The PREMIS Implementation Fair and Preservation Health Check took place in Toronto in conjunction with the 9th International Conference on Preservation of Digital Objects (iPRES 2012) on October 2, 2012. Presentations and notes from the event are available at: www.loc.gov/standards/premis/premis-implementation-fair2012.html

Updated version of the Victorian women writers project released

The Indiana University Digital Library Program and Indiana University Libraries have announced the launch of an updated version of the Victorian Women Writers Project (VWWP).

The VWWP was begun in 1995 at Indiana University under the determined leadership and editorship of Perry Willett. The VWWP was celebrated early on for exposing lesser-known British women writers of the nineteenth century, writers whose popularity did not carry over into the twentieth century or earn them a place in the literary canon. Quiet since 2003, the VWWP is pleased to be back with an expanded purview that includes women writing in the nineteenth century in English beyond Britain. As before, the project will devote time and attention to the accuracy and completeness of the texts, as well as to their bibliographical descriptions. New texts, encoded according to the Text Encoding Initiative (TEI) P5 Guidelines, will adopt principles of scholarly encoding, facilitating more sophisticated retrieval and analysis.

Since 2010, the VWWP has served as a pedagogical tool, imparting to English graduate and undergraduate students – at Indiana University and beyond – the critical and technical skills commonly employed by digital humanists. It has also served as a significant research tool to which the graduate and undergraduate students directly contribute. Their contributions include scholarly encoded texts, enhanced bibliographic access, and contextual materials such as scholarly annotations, introductions, and author biographies that shed further light on these little-known women writers.

New features that are part of this release include: genre browse based on the Modern Language Association Thesaurus, an interactive timeline situating authors, publications and major events in historical context, and contextual materials authored by students.

Now available are approximately 20 encoded texts that were created as part of a new digital humanities seminar (ENG L501: Professional Scholarship in Literature: Digital Humanities Practicum) taught in the IU English department during the Fall 2010 semester under the fearless leadership of Professor Joss Marsh, also known as “the manager.” Monographs by Mrs M. Alexander, Mary Cholmondeley, Juliana Ewing, Fanny Kemble, and Anne Thackeray Ritchie are now part of the collection, along with introductions to these texts and biographies authored by the L501 graduate students. Since then, the VWWP editors have partnered with Judson College and Texas A&M for additional contributions to the project as part of their respective English courses and curricula.

The approximately 200 online texts that were originally part of the VWWP were produced by transcription and encoded in the Standard Generalized Markup Language (SGML) following the TEI Guidelines, version P3, using the TEI Lite DTD (version 1.6).

In an effort to bring the encoding up to date, the original SGML TEI P3 files were transformed to XML TEI P4. Aspects of the encoding that were not conducive to automatic mapping and transformation were updated manually. In 2009, the TEI P4 version of the files was transformed again to TEI P5, the most current version, which also required manual intervention, both to address aspects lost in translation and to conform to the newly revised best practices for TEI in libraries. The VWWP texts now rely on a custom TEI P5 W3C schema.
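
The migration path described above (SGML/P3 to XML/P4 to P5) is essentially a batch transformation job. As a rough sketch only, and not the VWWP's actual workflow, the snippet below applies a P4-to-P5 conversion stylesheet (the TEI Consortium distributes one; here it is assumed to be saved locally as p4top5.xsl) to a directory of files using Python and lxml; all paths are hypothetical.

```python
# Hedged sketch: batch TEI P4 -> P5 conversion via an XSLT stylesheet.
# Paths and file layout are illustrative, not the VWWP's actual setup.
from pathlib import Path
from lxml import etree

xslt = etree.XSLT(etree.parse("stylesheets/p4top5.xsl"))  # assumed local copy

Path("texts/p5").mkdir(parents=True, exist_ok=True)
for p4_file in Path("texts/p4").glob("*.xml"):
    p5_doc = xslt(etree.parse(str(p4_file)))              # automatic mapping
    out_path = Path("texts/p5") / p4_file.name
    out_path.write_bytes(etree.tostring(p5_doc, xml_declaration=True,
                                        encoding="UTF-8", pretty_print=True))
# Encoding that does not map cleanly (as noted above) still requires manual review.
```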

The most recent version of the VWWP encoding guidelines, which now support richer, scholarly encoding and the encoding of related contextual materials, can be found at the VWWP wiki portal (https://wiki.dlib.indiana.edu/display/vwwp/).

The delivery and discovery capabilities of this site are implemented using a customized version of the open source eXtensible Text Framework (XTF) developed by the California Digital Library. It is served using the Tomcat application server and Apache HTTP Server software.

Local customizations to XTF at Indiana University include a unique native page image viewer and a page turner that are both driven exclusively by the information encoded in the source TEI files and require no additional software beyond what can be accomplished by customizing XTF’s XSLT templates. This feature enables switching between text and page images at any time while navigating a document’s structure, and allows viewing of one or more page images as moveable overlays simultaneously with the text in the paged text mode. The actual page images are stored and delivered via the IU Digital Library Program’s Fedora repository.
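
As a purely illustrative sketch of the principle that page-image sequencing can be derived from the TEI source alone, the snippet below lists the page-break milestones (pb elements and their facs references) in a TEI P5 file; the file name is hypothetical and this is not the XTF/XSLT code used at Indiana University.

```python
# Illustrative only: list page-break milestones and their image references
# from a TEI P5 document. The input file name is hypothetical.
from lxml import etree

TEI_NS = {"tei": "http://www.tei-c.org/ns/1.0"}

doc = etree.parse("sample_vwwp_text.xml")
for pb in doc.iterfind(".//tei:pb", namespaces=TEI_NS):
    # @n carries the page label, @facs points at the page image (if encoded)
    print(pb.get("n"), pb.get("facs"))
```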

The approximately 200 original texts that were part of the initial launch of the VWWP in 1996 are stored in a local e-text repository called Xubmit, which was developed by the IU Digital Library Program. Xubmit comprises web services built with Java and Axis, the Revision Control System (RCS) for file versioning, and a graphical user interface developed in Java that is delivered using the Tomcat application server. Along with the XML/TEI P5 files, Xubmit also stores the schema and Schematron files for the project.

VWWP: www.dlib.indiana.edu/collections/vwwp

VWWP project information page: http://webapp1.dlib.indiana.edu/vwwp/projectinfo.do

MetPublications: Metropolitan Museum of Art launches new online resource

The Metropolitan Museum of Art has launched MetPublications, a major online resource that offers unparalleled in-depth access to the museum’s renowned print and online publications, covering art, art history, archaeology, conservation, and collecting. Beginning with nearly 650 titles published from 1964 to the present, this new addition to the Met’s web site will continue to expand and could eventually offer access to nearly all books, bulletins, and journals published by the Metropolitan Museum since its founding in 1870, as well as online publications.

Readers may also locate works of art from the Met’s collections that are included within MetPublications and access the most recent information about these works in the collections section of the museum’s web site.

“MetPublications presents a rich and fascinating record of the last five decades of Met scholarship,” said Thomas P. Campbell, Director and CEO of the Metropolitan Museum:

I am particularly pleased that this new portal allows us to share the Met’s publications with a global audience. It will extend the reach of our past, current, and future publications, and give new life to out-of-print volumes.

MetPublications is made possible by Hunt & Betsy Lawrence.

MetPublications includes a description and table of contents for almost every title, as well as information about the authors, reviews, and awards, and links to related Met titles by author and theme. Current in-print titles may be previewed and fully searched online, with links to purchase the books. The full contents of almost all other titles may be read online, searched, or downloaded as a PDF, at no cost. Books can be read and searched through the Google Book program, an initiative to maximize access to the Met’s books.

A unique feature of MetPublications is that many out-of-print books are now available through print-on-demand capabilities, with copies offered for purchase through Yale University Press. At the launch of the program, 140 titles will be available in print-on-demand paperbound copies with digitally printed color reproductions.

Readers are also directed to each title's record in the online library catalogues WorldCat, a global catalogue of library collections, and WATSONLINE, the Metropolitan Museum's catalogue of its own libraries' holdings.

MetPublications, as of its launch, allows users to:

  • Search 643 books published by The Metropolitan Museum of Art about art and art history by title, author, keyword, publication type, theme, or collection.

  • Read, download, and search the full contents of 368 out-of-print titles.

  • Preview and search the contents of 272 titles that are in print or otherwise unavailable to read fully.

  • Obtain print-on-demand copies of 140 out-of-print titles.

  • Access two online publications, Heilbrunn Timeline of Art History and Connections.

  • Find links to locate all titles at local libraries through WorldCat, and on the Metropolitan Museum library catalogue Watsonline.

  • Find book descriptions, tables of contents, author biographies, press releases and reviews, awards, and related bibliographies by author, theme, and keyword.

  • Explore works of art from the Metropolitan Museum’s collection featured in all titles with links to updated information about each work. This makes it possible to provide updated information about older titles, linking earlier with current scholarship.

  • Rediscover the scholarship of 50 years of publishing dedicated to the arts from this encyclopedic museum.

Publications to be added to the program on a continuing basis include recently published books and online publications, and print titles published by the Metropolitan Museum from 1870 to 1964, as well as print-on-demand options for out-of-print titles.

MetPublications was created by the staff of the Metropolitan Museum’s Editorial and Digital Media departments.

MetPublications: www.metmuseum.org/metpublications

Library of Congress unveils Congress.gov public beta site

On September 19, 2012, the Library of Congress, in collaboration with the US Senate, House of Representatives and the Government Printing Office (GPO), unveiled Congress.gov, a new public beta site for accessing free, fact-based legislative information. Congress.gov features platform mobility, comprehensive information retrieval and user-friendly presentation. Congress.gov, at beta.congress.gov, eventually will replace the public THOMAS system and the congressional legislative information system (LIS).

“The new, more robust platform reaffirms for the twenty-first century Congress’s vision of a vital legislative information resource for all Americans,” said Librarian of Congress James H. Billington:

It is fitting that we announce this new resource within days of Constitution Day, celebrating the establishment of our representative democracy. Continual enhancements to and now reinvention of this resource reflect the Library’s commitment to Congress’s goal to open the legislative process to the American people and promote an informed democracy.

Sen. Charles E. Schumer (D-N.Y.), chairman of the Senate Rules and Administration Committee and the Joint Committee on the Library, said:

The Congress.gov website heralds a new era in presenting congressional information online, with tools and infrastructure unimaginable 17 years ago. Congress.gov will allow people at all levels of experience and expertise to follow legislative developments, access and compare policy proposals, and connect with their senators and representatives.

Rep. Dan Lungren (R-Calif.), Chairman of the Committee on House Administration, said, “Congress.gov will enhance transparency, increase savings for the Library, and provide Congress and the nation the vital legislative information we need to deliberate about our collective public policies.”

Rep. Gregg Harper (R-Miss.), Vice-Chair of the Joint Committee on the Library, said:

I offer my congratulations to the Library on the new Congress.gov website. Since the launch of THOMAS in 1995, Congress has relied on the Library to make the work of Congress available to the public in a coherent, comprehensive way. The Library staff has a strong working relationship with the House, Senate and GPO, which will enable the Library to successfully develop the next generation legislative information website.

THOMAS, named for Jefferson, was launched by the Library in 1995 as a bipartisan initiative of Congress and averages ten million visits each year. The system has been updated over the years, but the foundation can no longer support the capabilities that today’s internet users have come to expect, including access on mobile devices.

Using best practices for retrieving and displaying information, the refined, user-friendly system also will make finding and using legislative information more intuitive, comprehensive and accessible than the existing system. Congress.gov connects the information with a title and URL more readily identified by all constituencies.

The Congress.gov site includes bill status and summary, bill text and member profiles and the following new features:

  • effective display on mobile devices;

  • ability to narrow and refine search results;

  • ability to simultaneously search all content across all available years, with some files dating from the 93rd Congress;

  • easier identification of current bill status;

  • members’ legislative history and biographical profiles; and

  • maintenance of existing features such as links to video of the House and Senate floor, top searched bills and the save/share feature.

Data for the information system is provided by multiple legislative branch partners in this effort, including the Office of the Secretary of the Senate, the Office of the Clerk of the US House of Representatives, the Office of the Senate Sergeant at Arms, the Office of the Chief Administrative Officer of the US House of Representatives and the GPO.

The project was chaired by Deputy Librarian of Congress Robert Dizard Jr. Development of the new site was a collaborative effort drawing from expertise across the library, including technical experts from the Office of Strategic Initiatives and congressional protocol and subject matter experts from the Congressional Research Service and the Law Library of Congress.

The library is releasing Congress.gov as a beta site to enable a period of time for collecting user feedback and refining functionality while other content is incorporated. Other data, such as the Congressional Record, committee reports, nominations, treaties and communications, will be incorporated over time in a planned, prioritized order. The library anticipates Congress.gov will operate as a beta site for approximately one year as this work is completed. During that time, both THOMAS and LIS will continue to operate as usual.

Congress.gov (beta): http://beta.congress.gov/

DOAB: Directory of Open Access Books

The primary aim of the Directory of Open Access Books (DOAB) is to increase discoverability of Open Access books. Academic publishers are invited to provide metadata of their Open Access books to DOAB. Metadata will be harvestable in order to maximize dissemination, visibility and impact. Aggregators can integrate the records in their commercial services and libraries can integrate the directory into their online catalogues, helping scholars and students to discover the books. The directory will be open to all publishers who publish academic, peer reviewed books in Open Access and should contain as many books as possible, provided that these publications are in Open Access and meet academic standards.

DOAB supports the OAI protocol for metadata harvesting (OAI-PMH). Service providers and libraries can use the protocol to harvest the metadata of the records from DOAB for inclusion in their collections and catalogues.
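
A minimal harvesting sketch is shown below. The OAI-PMH base URL is an assumption for illustration (consult DOAB's documentation for the actual endpoint); the snippet issues a ListRecords request for Dublin Core metadata and prints the titles from the first response page.

```python
# Hedged OAI-PMH harvesting sketch; the base URL is assumed, not documented here.
import requests
import xml.etree.ElementTree as ET

BASE_URL = "http://www.doabooks.org/oai"          # assumed OAI-PMH endpoint
NS = {
    "oai": "http://www.openarchives.org/OAI/2.0/",
    "dc": "http://purl.org/dc/elements/1.1/",
}

resp = requests.get(BASE_URL, params={"verb": "ListRecords",
                                      "metadataPrefix": "oai_dc"})
resp.raise_for_status()
root = ET.fromstring(resp.content)

for record in root.iterfind(".//oai:record", NS):
    for title in record.iterfind(".//dc:title", NS):
        print(title.text)

# A full harvest would follow the resumptionToken in each response
# until the repository reports no more records.
```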

Libraries and aggregators can also download the list of records in DOAB in a comma separated format (available at: www.doabooks.org/doab?func=csv). Then they can import the file to Excel or some other software program for further use.
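
For the CSV route, something along these lines would work (a sketch only; the column layout is whatever DOAB's export provides and is not assumed here):

```python
# Download the DOAB record list as CSV (URL given above) and read it with the
# standard csv module; the header row defines whatever columns DOAB exports.
import csv
import io
import requests

CSV_URL = "http://www.doabooks.org/doab?func=csv"

resp = requests.get(CSV_URL)
resp.raise_for_status()

reader = csv.reader(io.StringIO(resp.text))
header = next(reader)      # column names as provided by DOAB
print(header)
for row in reader:
    print(row[:3])         # first few fields of each record
```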

DOAB determines requirements for participation by publishers, in consultation with the participating publishers and DOAB Advisory Board. The current requirements have been specified by the OAPEN Foundation in consultation with the Open Access Scholarly Publishers Association (OASPA). The current requirements to take part in DOAB are twofold:

  1. Academic books in DOAB shall be available under an Open Access license (such as a Creative Commons license).

  2. Academic books in DOAB shall be subjected to independent and external peer review prior to publication.

The policies and procedures regarding peer review and licensing should be clearly outlined on the publisher web site.

The DOAB is a service of OAPEN Foundation. The OAPEN Foundation is an international initiative, based at the National Library in The Hague, dedicated to Open Access monograph publishing. DOAB is being developed in close cooperation with Lars Bjørnshauge and Salam Baker Shanawa (director of SemperTool), who were also responsible for the development of the Directory of Open Access Journals (DOAJ). SemperTool develops and maintains the DOAB system.

As part of the user needs research, an open online discussion was hosted on the DOAB mailing list and the DOABlog from 9 until 22 July. This online discussion with publishers, scholars, and the wider Open Access and publishing community focused on gathering the range of opinions and views on Open Access books, quality control, peer review, and Open Access publishing of books. Digests of the daily discussion have been posted to the DOABlog at: http://doabooks.wordpress.com/category/discussion/

DOAB: www.doabooks.org/

De Gruyter develops new model for patron driven acquisition

The idea behind Patron Driven Acquisition (PDA) is simple: to offer users access to all digital content, but only charge for actual use. In partnership with the University of Hagen, Jülich Research Center, and University of Mannheim, De Gruyter recently completed a one-year trial of PDA, an innovative form of distribution that provides users with full content access prior to purchase. Based on the insights of this trial, De Gruyter has developed a new and consistent PDA model, which it is showcasing at the 2012 Frankfurt Book Fair, October 10-14.

The trial focused on answering three questions that have concerned libraries and publishers: is PDA an economically sustainable strategy for both publishers and libraries? What metrics for determining PDA fees, such as usage levels or the size of the library, are applicable to the market as a whole? How should the PDA service be structured when offered by a publisher, in contrast to retailers and aggregators? The trial was supervised by Professor Michael Seadle, Director of the Berlin School for Library and Information Science at the Humboldt University of Berlin.

“PDA is an excellent model for providing academic content to research institutes in a particularly cost-effective way,” says Katrin Siems, Vice President of Marketing and Sales at De Gruyter:

During the trial we witnessed an increase in usage statistics for our content, and, based on the trial’s insights, have developed a distribution model that is oriented to the needs and concerns of libraries.

Libraries can rent full access to over 450,000 journal articles and book chapters as well as over 15 million database entries. At the end of the rental period the paid fees can then be applied to the purchase of desired content. “The advantages are clear,” Katrin Siems explains. “Libraries are able to provide their users with a large volume of content, but have the flexibility to only acquire content that is actually used”.

With regard to retailers, Katrin Siems has a positive assessment of the opportunity for cooperation: “It is important for De Gruyter to involve retailers in this model.”

Based upon the trial with three libraries and the accompanying survey, De Gruyter has developed a business model for PDA that takes into account eight important criteria:

  1. Unlimited access is provided to all patrons for all content during the utilization period, with no supervision of the PDA required.

  2. The model does not rely on data concerning each library's previous expenditures.

  3. A maximum expenditure limit reduces cost risks for the library.

  4. A minimum expenditure limit reduces revenue risks for the publishing house.

  5. Libraries can choose between format types and subject areas (STM/Social Sciences and Humanities).

  6. Libraries may convert their PDA fees into permanent ownership rights.

  7. Librarians remain involved in acquisition.

  8. Libraries are not charged under the PDA model for use of previously acquired content.

For more information on De Gruyter’s model for PDA: www.degruyter.com/page/428

E-books: developments and policy considerations – report from OECD

Books have undergone a massive transformation from physical objects to something entirely different: the electronic book, or “e-book”. A new report, “E-books: developments and policy considerations,” provides background on e-book markets and examines various policy issues related to e-books. These include tax rates that differ between physical books and e-books in some countries, consumer lock-in to specific platforms, limitations on how users can read and share their purchased content, and a lack of transparency about how data on readers' habits is being used.

This report is part of an Organization for Economic Co-operation and Development (OECD) series on digital content. Other studies include online news, public sector information, film and video, user-created content, mobile content, online computer games, music, and scientific publishing. The OECD Directorate for Science, Technology and Industry (STI) undertakes a wide range of activities to better understand how information and communication technologies (ICTs) contribute to sustainable economic growth and social well-being. The OECD Digital Economy Papers series covers a broad range of ICT-related issues and makes selected studies available to a wider readership. They include policy reports, which are officially declassified by an OECD Committee, and occasional working papers, which are meant to share early knowledge.

The October 2012 report “E-books: developments and policy considerations” is available to download in several formats (PDF; EPUB, e-reader version, e.g. for Apple and generic e-book readers; and MOBI, Amazon Kindle version) at: www.oecd.org/sti/interneteconomy/e-booksdevelopmentsandpolicyconsiderations.htm

OECD work on digital content: www.oecd.org/internet/interneteconomy/oecdworkondigitalcontent.htm

Kindred Works, experimental recommender service from OCLC Research

There are many ways to find a new book to read or movie to view. OCLC Research has developed an experimental service that provides a list of items similar to an item of interest, an approach called content-based recommendation. The prototype service uses various characteristics of a sample work, such as classification numbers, subject headings, and genre terms, to retrieve related resources from WorldCat.

The recommendations are accessible through a user interface and through an application programming interface (API). The user interface, Kindred Works, provides basic search functionality; users can search by author, title, ISBN, or OCLC number. The recommendations may include books, ebooks, audiobooks, music, and video materials.

For a library that participates in WorldCat.org, the recommendations can be customized to the collection of the individual library by adding the library’s OCLC holding symbol to the query. The API is intended to be used by software developers to integrate recommendations into another service or application, for example, a library catalog or other discovery interface.
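
As a hedged sketch of what a client call might look like, the snippet below queries the recommender API base URL given in the links below; the parameter names (for the OCLC number of the sample work and the library's holding symbol) are illustrative assumptions, not documented values, so check the activity page for the actual request format.

```python
# Hedged sketch of a Kindred Works recommender API call. The base URL is taken
# from the links below; the parameter names are assumptions for illustration.
import requests

API_BASE = "http://experimental.worldcat.org/recommender/"

params = {
    "oclcnum": "37443535",   # hypothetical OCLC number for the sample work
    "library": "XXX",        # assumed parameter for an OCLC holding symbol
}
resp = requests.get(API_BASE, params=params)
resp.raise_for_status()
print(resp.text)             # inspect the response to see the returned format
```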

The prototype and user interface is available at: http://experimental.worldcat.org/kindredworks/

The API is available at: http://experimental.worldcat.org/recommender/

Kindred Works Activity Page: www.oclc.org/research/activities/kindredworks.html

assignFAST: new prototype service for efficient FAST subject assignment

OCLC Research has announced the availability of assignFAST, a new web service that automates the manual selection of Faceted Application of Subject Terminology (FAST) subjects (the authorized and “use for” headings) based on autosuggest technology.

Subject assignment is a two-phase task. The first phase is intellectual: reviewing the material and selecting the correct heading. The second phase is more mechanical: finding the correct form of the heading, along with any diacritics; cutting and pasting it into the cataloging interface; and potentially correcting formatting and subfield coding. If authority control is available in the interface, some of these tasks may be automated.

assignFAST consolidates the entire second phase of the manual process of subject assignment into a single step based on autosuggest technology. The service can easily be added to an existing browser-based interface, providing both subject selection and authority control in a single step.

A demo is provided at: http://experimental.worldcat.org/fast/assignfast/. This interface is only intended to show how the feature can be integrated into an existing interface. The web service is available at: www.oclc.org/developer/services/assignfast/. Three potential cataloging formats are given: a common (non-MARC) format, an OCLC Connexion®-style format, and a MARCBreaker-style format. Other formats could be added by a programmer by following these examples.
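
By way of illustration only, a lookup against the service might look like the sketch below; the request path and parameter names are assumptions, so refer to the web service documentation linked above for the actual interface and the three cataloging formats it returns.

```python
# Hedged sketch of an assignFAST autosuggest lookup. The endpoint is the demo
# base given above; the parameter names below are assumed, not documented here.
import requests

ENDPOINT = "http://experimental.worldcat.org/fast/assignfast/"

resp = requests.get(ENDPOINT, params={
    "query": "digital preservat",   # partial heading as typed by a cataloger
    "rows": 5,                      # assumed: number of suggestions to return
})
resp.raise_for_status()
print(resp.text)                    # suggestions in one of the supported formats
```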

Since this functionality takes up little space, it could also be useful as an OpenSocial gadget, which can be added to the OCLC WorldShare™ Platform or other OpenSocial platforms. Three gadget URLs are provided in the example cataloging formats mentioned above.

assignFAST takes advantage of the features of FAST. Faceting, along with a fully controlled vocabulary, allows simple selection and authority control to take place that would be hard to accomplish in LCSH.

assignFAST Activity Page: www.oclc.org/research/activities/assignfast.html

assignFAST demo: http://experimental.worldcat.org/fast/assignfast/

assignFAST web service: www.oclc.org/developer/services/assignfast/

FAST: www.oclc.org/research/activities/fast.html

JSTOR-enabled data mining project signals next wave in research

A team of researchers led by Jevin West and Carl Bergstrom of the University of Washington has released the results of an 18-month study of gender inequality among authors of academic papers. The study is based on an analysis of the authors of more than 1.8 million published research articles available through the not-for-profit digital library JSTOR.

This project exemplifies the kind of research made possible by new digital technologies that JSTOR has supported for more than a decade and that was first publicized in 1999 by the work of Yale University legal scholar and law librarian Fred Shapiro. Shapiro used data from JSTOR to document first uses of words that pre-dated the Oxford English Dictionary.

Fast forward to 2008, when JSTOR launched its self-service Data for Research web site, enabling anyone in the world to explore its holdings and freely create datasets for use in their research. Today the site sees about 700 datasets created and downloaded annually. Larger-scale projects like the one undertaken by West, Bergstrom, and their co-authors (Jennifer Jacquet, Molly King, Shelley Correll, and Theodore Bergstrom) are handled upon request and in close collaboration with JSTOR's Advanced Technologies Research team.

“It’s beyond exciting to see the digital library we have spent years creating being tapped into by computer scientists, digital humanists, and other researchers around the world,” said Ronald Snyder, Director of the Advanced Technologies Research team.

“By providing us information about millions of papers published over centuries, these data allow us to ask questions about the structure of scholarly communication on unprecedented scales,” said Bergstrom. “We see the gender project as just the beginning,” added West. “The data really is a gold mine, and we are excited to continue to work with JSTOR and utilize this powerful research environment.”

While the research itself is ground-breaking, the benefits of projects like the one just released by the West-Bergstrom team can reach beyond the findings themselves. The West-Bergstrom team also created an interactive tool that allows others to explore the underlying content based on the work they have done. This demonstrates how sharing large corpora of data can also lead to new ways of exploring and discovering scholarship, effectively giving researchers another lens through which to view the published literature.

“Enabling new scholarship that was previously impossible, or nearly so, is at the very heart of our mission to advance education through the use of new technologies,” said Laura Brown, JSTOR Managing Director:

As more scholars and students across disciplines are trained in data mining and textual analysis, we look forward to supporting and advancing their work through our Data for Research Program.

JSTOR (www.jstor.org) is a digital library of more than 1,500 academic journals, books, and primary sources. JSTOR helps people discover, use, and build upon a wide range of content through a powerful research and teaching platform, and preserves this content for future generations. JSTOR is part of ITHAKA, a not-for-profit organization that also includes Ithaka S+R and Portico.

Read more about West and Bergstrom’s research at: www.eigenfactor.org/about.php

More information on JSTOR's Data for Research Program: http://dfr.jstor.org/?view=text&helpview=about_dfr

Young Americans and reading, library use, and online research – new reports from Pew Internet

Two reports recently released by the Pew Research Center’s Internet & American Life Project looked at the reading and library habits of younger Americans, and how they are using digital technologies to do research.

More than eight in ten Americans ages 16-29 read a book in the past year, and six in ten used their local public library. Many say they are reading more in the era of digital content, especially on their mobile phones and on computers.

These findings come from a new report from Pew that examines younger Americans’ reading and library use habits during the rise of e-content. This research is part of a larger effort to assess the reading and library use habits of all Americans ages 16 and older and these findings will be reported at the Internet Librarian 2012 Conference, October 22-24 in Monterey, California.

According to a nationally representative poll by the Pew Research Center’s Internet and American Life Project:

  • 83 percent of Americans between the ages of 16 and 29 read a book in the past year. Some 75 percent read a print book, 19 percent read an e-book, and 11 percent listened to an audiobook.

  • Among Americans who read e-books, those under age 30 are more likely to read their e-books on a cell phone (41 percent) or computer (55 percent) than on an e-book reader such as a Kindle (23 percent) or tablet (16 percent).

  • Overall, 47 percent of younger Americans read long-form e-content such as books, magazines or newspapers. E-content readers under age 30 are more likely than older e-content readers to say that they are reading more these days due to the availability of e-content (40 percent vs 28 percent).

  • About half (48 percent) of readers under age 30 said they had purchased their most recently read book. Another 24 percent said they had borrowed it from a friend or family member, and 14 percent said they borrowed it from a library.

The report also examines younger Americans' library usage, and what e-book-related services they might be interested in at their local libraries:

  • 60 percent of Americans under age 30 used the library in the past year. Some 46 percent used the library for research, 38 percent borrowed books (print books, audiobooks, or e-books), and 23 percent borrowed newspapers, magazines, or journals.

  • High-school-aged readers were more likely to have borrowed the last book they read from the library (37 percent) than to have bought it (26 percent). This pattern soon reverses for older age groups – almost six in ten readers in their late twenties said they had purchased their last book.

  • Many young e-book readers do not know they can borrow an e-book from a library. Among those ages 16-29 who have not borrowed an e-book from the library, 52 percent said they were unaware they could do so.

  • A majority of non-borrowers under age 30 expressed an interest in borrowing pre-loaded e-readers. Some 58 percent of those under age 30 who do not currently borrow e-books from libraries say they would be “very” or “somewhat” likely to borrow pre-loaded e-readers if their library offered that service.

“High schoolers stand out in several ways. We found that libraries are a large part of how readers ages 16-17 get their books, more so than older adults. These high schoolers are more likely than other age groups to use the library, including for research and book-borrowing,” said Kathryn Zickuhr of the Pew Research Center’s Internet and American Life Project, a co-author of the report:

Yet their appreciation for these library services doesn't quite match up – almost half of 16-17 year-olds say that the library is ‘not important’ or ‘not too important’ to them and their family, significantly more than other age groups.

The main findings in this report, including all statistics and quantitative data, are from a nationally-representative phone survey of 2,986 people ages 16 and older that was administered from November 16-December 21, 2011. This report also contains the voices and insights of an online panel of library patrons ages 16-29 who borrow e-books, fielded in the spring of 2012.

Read the full report at: http://libraries.pewinternet.org/2012/10/23/younger-americans-reading-and-library-habits/

A second recent survey by Pew, “How teens do research in the digital world”, finds that the teachers who instruct the most advanced American secondary school students render mixed verdicts about students’ research habits and the impact of technology on their studies.

Some 77 percent of advanced placement (AP) and national writing project (NWP) teachers surveyed say that the internet and digital search tools have had a “mostly positive” impact on their students’ research work. But 87 percent say these technologies are creating an “easily distracted generation with short attention spans” and 64 percent say today’s digital technologies “do more to distract students than to help them academically.”

According to this survey of teachers, conducted by the Pew Research Center’s Internet and American Life Project in collaboration with the College Board and the NWP, the internet has opened up a vast world of information for today’s students, yet students’ digital literacy skills have yet to catch up:

  • Virtually all (99 percent) AP and NWP teachers in this study agree with the notion that “the internet enables students to access a wider range of resources than would otherwise be available,” and 65 percent agree that “the internet makes today’s students more self-sufficient researchers.”

  • At the same time, 76 percent of teachers surveyed “strongly agree” with the assertion that internet search engines have conditioned students to expect to be able to find information quickly and easily.

  • Large majorities also agree with the notion that the amount of information available online today is overwhelming to most students (83 percent) and that today’s digital technologies discourage students from using a wide range of sources when conducting research (71 percent).

  • Fewer teachers, but still a majority of this sample (60 percent), agree with the assertion that today’s technologies make it harder for students to find credible sources of information.

  • Given these concerns, it is not surprising that 47 percent of these teachers strongly agree, and another 44 percent somewhat agree, that courses and content focusing on digital literacy should be incorporated into every school's curriculum.

Data collection for the survey was conducted in two phases. In phase one, Pew Internet conducted two online focus groups and one in-person focus group with middle and high school teachers; focus group participants included AP teachers, teachers who had participated in the NWP's Summer Institute, and teachers at a College Board school in the Northeast USA. Two in-person focus groups were also conducted with students in grades 9-12 from the same College Board school. The goal of these discussions was to hear teachers and students talk about, in their own words, the different ways they feel digital technologies such as the internet, search engines, social media, and cell phones are shaping students' research and writing habits and skills. Teachers were asked to speak in depth about teaching research and writing to middle and high school students today, the challenges they encounter, and how they incorporate digital technologies into their classrooms and assignments.

Focus group discussions were instrumental in developing a 30-minute online survey, which was administered in phase two of the research to a national sample of middle and high school teachers. The survey results reported here are based on a non-probability sample of 2,462 middle and high school teachers currently teaching in the USA, Puerto Rico, and the US Virgin Islands. Of these 2,462 teachers, 2,067 completed the entire survey; all percentages reported are based on those answering each question. The sample is not a probability sample of all teachers because it was not practical to assemble a sampling frame of this population. Instead, two large lists of teachers were assembled: one included 42,879 AP teachers who had agreed to allow the College Board to contact them (about one-third of all AP teachers), while the other was a list of 5,869 teachers who participated in the NWP’s Summer Institute during 2007-2011 and who were not already part of the AP sample. A stratified random sample of 16,721 AP teachers was drawn from the AP teacher list, based on subject taught, state, and grade level, while all members of the NWP list were included in the final sample.
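
As a generic illustration of the stratified draw described above (not Pew's actual code or data), the sketch below samples the same fraction from every subject/state/grade stratum of a made-up teacher list using pandas.

```python
# Illustrative stratified random sample: draw the same fraction from every
# subject/state/grade stratum of a (made-up) teacher list.
import pandas as pd

teachers = pd.DataFrame({
    "id": range(1000),
    "subject": ["English", "History", "Science", "Math"] * 250,
    "state": ["NY", "CA", "TX", "OH", "WA"] * 200,
    "grade": ["9-10", "11-12"] * 500,
})

SAMPLING_FRACTION = 16721 / 42879   # roughly the proportion reported above

sample = (
    teachers
    .groupby(["subject", "state", "grade"], group_keys=False)
    .apply(lambda g: g.sample(frac=SAMPLING_FRACTION, random_state=42))
)
print(len(sample), "teachers sampled from", len(teachers))
```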

The online survey was conducted from March 7 to April 23, 2012. More details on how the survey and focus groups were conducted are included in the Methodology section at the end of this report, along with focus group discussion guides and the survey instrument.

Read the full report at: www.pewinternet.org/Reports/2012/Student-Research.aspx

Emerging technologies in academic libraries: emtacl12 papers now available

Academic libraries face specific challenges – emerging technologies present new opportunities. emtacl12 (Emerging Technologies in Academic Libraries) is a technology-oriented conference for information professionals working in higher education. The program for the 2012 conference held October 1-3 in Trondheim, Norway, focused on gathering the most respected speakers within their fields to provide ideas and inspiration that will help shape the work of the information profession in academic institutions.

Presentation materials from emtacl12 are now available, including keynote speeches from Herbert Van de Sompel, Los Alamos National Laboratory (“Paint-Yourself-In-The-Corner Infrastructure”), and Karen Coyle, library technology consultant (“Think Different”).

Presentations from emtacl12: http://emtacl.com/presentations/

emtacl on Facebook: www.facebook.com/pages/Emtacl/79318928229

Issue 18 of the Code4Lib Journal now available

The Code4Lib Journal’s mission is to foster community and share information. Editor Ron Peterson hopes that reading the articles in this issue will help you develop your own ideas and solutions, and share ideas with the community. The articles in issue 18 include:

“Prototyping as a process for improved user experience with library and archives websites,” by Shaun Ellis and Maureen Callahan. Prototypes can be persuasive tools for proposing changes within an organization through “imagine if” scenarios. In redesigning the Princeton University Finding Aids site (http://findingaids.princeton.edu), we used a flexible subset of agile practices based around measurable goals, iterative prototypes, meetings with institutional stakeholders, and “discount usability testing” to deliver an innovative and much-improved user experience. This article discusses how integrating relatively untested, but promising new ideas for online finding aids required us to adopt a development process that would allow us to better understand the goals of both general and staff users and in turn foster an environment for innovation that thrives on collaboration, iteration, and managed risk.

“Hacking 360 Link: a hybrid approach,” by John Durno. When the University of Victoria Libraries switched from a locally-hosted link resolver (SFX) to a vendor-hosted link resolver (360Link), new strategies were required to effectively integrate the vendor-hosted link resolver with the Libraries’ other systems and services. Custom javascript is used to add links to the 360Link page; these links then point at local PHP code running on UVic servers, which can then redirect to appropriate local service or display a form directly. An open source PHP OpenURL parser class is announced. Consideration is given to the importance of maintaining open protocols and standards in the transition to vendor-hosted services.

“Jarrow, electronic thesis and dissertation software,” by James MacDonald and Daniel Yule. Collecting and disseminating theses and dissertations electronically is not a new concept. Tools and platforms have emerged to handle various components of the submission and distribution process. However, there is no single tool that handles the entirety of the process, from the moment the student begins work on their thesis to the dissemination of the final thesis. The authors have created such a tool, which they have called Jarrow. After reviewing available open-source software for thesis submission and open-source institutional repository software, this paper discusses why and how Jarrow was created and how it works.

“A hybrid solution for improving single sign-on to a proxy service with Squid and EZproxy through Shibboleth and ExLibris’ Aleph X-Server,” by Alexander Jerabek and Minh-Quang Nguyen. This paper describes an implementation of a hybrid solution for improving the library’s proxy service by integrating Shibboleth and ExLibris’ Aleph’s X-server using a proxy server running both EZproxy and Squid applications. The main benefit of this solution is that instead of relying on e-resource vendors to become Shibboleth-compliant, we are able to prepare and deploy a Shibboleth-ready environment while granting our patrons reliable and stable access to e-resources via different types of connections. As of December 2011, the hybrid solution is running in production.

“Modular mobile application design,” by Jim Hahn and Nathaniel Ryckman. This article describes the development of the Minrva library app for Android phones. The decisions to build a native application with Java and use a modular design are discussed. The application includes five modules: catalog search, in-building navigation, a barcode scanning feature, and up-to-date notifications of circulating technology availability. The article also reports on the findings of two rounds of usability testing and the plans for future development of the app.

“Patron-driven expedited cataloging enhancement to WebPAC Pro,” by Steven Jay Bernstein. This article outlines the development of an integrated patron-driven expedited cataloging feature in the catalog of the Connecticut State University Library System (CONSULS). The proposed enhancement to the library's Innovative Millennium ILS provides users with a direct method for obtaining newly arrived library materials and gives the Cataloging and Metadata Services Departments at the four Connecticut State University campuses a way to better identify priority materials in their queues. While the project was developed with a single ILS in mind, the idea behind it can easily be implemented on almost any other integrated library system.

“Using PHP to parse eBook resources from Drupal 6 to populate a mobile web page,” by Junior Tidal. The Ursula C. Schwerin Library needed to create a page for its mobile web site devoted to subscribed eBooks. These resources were organized using the Drupal 6 content management system with contributed and core modules. A solution was needed to retrieve the eBook databases from the Drupal installation for use on a separate mobile site.

“LibALERTS: an author-level subscription system,” by Matt Weaver. Patron requests for the ability to subscribe to their favorite authors, so they could receive notifications when new titles are released, presented an opportunity for Westlake Porter Public Library to learn, to build, and to engage with patrons on the development of a new service. The library's libALERTS service, which launched in June 2012, was the culmination of a process that involved the development of a Drupal-based web site augmented with a hand-coded preprocess interface that addressed critical concerns for the effectiveness of the service.

Code4Lib Journal issue 18: http://journal.code4lib.org/issues/issue18
