Search results: 1–6 of 6
This study investigates an approach to book metrics for research evaluation that takes into account the complexity of scholarly monographs. This approach is based on work sets – unique scholarly works and their within-work related bibliographic entities – for scholarly monographs in national databases for research output.
This study examines bibliographic records on scholarly monographs acquired from four European databases (VABB in Flanders, Belgium; CROSBI in Croatia; CRISTIN in Norway; COBISS in Slovenia). Following a data enrichment process using metadata from OCLC WorldCat and Amazon Goodreads, the authors identify work sets and their corresponding ISBNs. Next, on the basis of the number of ISBNs per work set and the presence in WorldCat, they design a typology of scholarly monographs: Globally visible single-expression works, Globally visible multi-expression works, Miscellaneous, and Globally invisible works.
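The typology described above could be sketched as a simple classification rule. This is a hypothetical illustration only: the abstract does not give the authors' exact decision criteria, so the field names and the handling of the Miscellaneous category are assumptions.

```python
def classify_work_set(isbn_count: int, in_worldcat: bool) -> str:
    """Assign a work set to one of the four proposed types.

    isbn_count  -- number of ISBNs identified for the work set
    in_worldcat -- whether the work set is present in OCLC WorldCat

    The rules here are an illustrative reading of the abstract,
    not the authors' published operationalisation.
    """
    if isbn_count == 0:
        # e.g. no resolvable ISBNs after enrichment (assumption)
        return "Miscellaneous"
    if not in_worldcat:
        return "Globally invisible work"
    if isbn_count > 1:
        return "Globally visible multi-expression work"
    return "Globally visible single-expression work"
```

For example, a monograph with three ISBNs (print, e-book, translation) that is present in WorldCat would fall under the multi-expression type.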
The findings show that the concept “work set” and the proposed typology can aid the identification of influential scholarly monographs in the social sciences and humanities (i.e. the Globally visible multi-expression works).
In light of the findings, the authors outline requirements for the bibliographic control of scholarly monographs in national databases for research output that facilitate the use of the approach proposed here.
The authors use insights from library and information science (LIS) to construct complexity-sensitive book metrics. In doing so, the authors, on the one hand, propose a solution to a problem in research evaluation and, on the other hand, bring to attention the need for a dialogue between LIS and neighbouring communities that work with bibliographic data.
The purpose of this paper is to assess the value of Goodreads reader ratings for measuring the wider impact of scholarly books published in the field of History.
Book titles were extracted from the reference lists of articles that appeared in 604 history journals indexed in Scopus (2007-2011). The titles were cleaned and matched with WorldCat.org (for publisher information) as well as Goodreads (for reader ratings) using an API. A set of 8,538 books was first filtered based on Dewey Decimal Classification class 900 “History and Geography”, then a subset of 997 books with the highest citations and reader ratings (i.e. top 25 per cent) was analysed separately based on additional characteristics.
A weak correlation (0.212) was found between citation counts and reader rating counts for the full data set (n=8,538). The subset of 997 books showed a similarly weak correlation (0.190). Further correlations between citations, reader ratings, written reviews, and library holdings indicate that a reader rating on Goodreads was more likely to be given to a book held in an international library, including both public and academic libraries.
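A correlation between paired counts of this kind can be computed as follows. This is a minimal sketch, assuming citation counts and Goodreads rating counts are available as parallel arrays for the same books; the numbers below are illustrative placeholders, not data from the study, and the abstract does not state whether a Pearson or rank-based coefficient was used.

```python
import numpy as np

# Illustrative paired observations: one entry per book (assumed data)
citations = np.array([12, 5, 40, 3, 8, 22, 1, 15])
ratings = np.array([4, 0, 9, 1, 2, 3, 0, 6])

# Pearson correlation between citation counts and reader rating counts
r = np.corrcoef(citations, ratings)[0, 1]
print(round(r, 3))
```

With real data, a value near 0.2, as reported above, would indicate that citations and reader ratings capture largely distinct kinds of attention.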
Research on altmetrics has focused almost exclusively on scientific journal articles appearing on social media services (e.g. Twitter, Facebook). In this paper we show the potential of Goodreads reader ratings to identify the impact of books beyond academia. As a unique altmetric data source, Goodreads can allow scholarly authors from the social sciences and humanities to measure the wider impact of their books.
The purpose of this paper is to explore the use of LexiURL as a Web intelligence tool for collecting and analysing links to digital libraries, focusing specifically on the National electronic Library for Health (NeLH).
The Web intelligence techniques in this study are a combination of link analysis (web structure mining), web server log file analysis (web usage mining), and text analysis (web content mining), utilizing the power of commercial search engines and drawing upon the information science fields of bibliometrics and webometrics. LexiURL is a computer program designed to calculate summary statistics for lists of links or URLs. Its output is a series of standard reports, for example listing and counting all of the different domain names in the data.
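The kind of standard report described above, counting the distinct domain names in a list of links, can be sketched in a few lines. This is not LexiURL itself; it is a minimal illustration of the technique, and the URLs are invented placeholders rather than data from the study.

```python
from collections import Counter
from urllib.parse import urlparse

# Illustrative list of linking URLs (placeholders, not study data)
urls = [
    "http://www.example.edu/library/links.html",
    "http://www.example.edu/medicine/",
    "http://health.example.org/resources",
    "http://www.example.gov/nhs/",
]

# Count how many links come from each domain, as a LexiURL-style summary
domains = Counter(urlparse(u).netloc for u in urls)
for domain, count in domains.most_common():
    print(domain, count)
```

Aggregating links by domain in this way is what allows the analysis below to characterise the governmental, educational, and commercial context in which a digital library is embedded.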
Link data, when analysed together with user transaction log files (i.e. Web referring domains), can provide insights into who is using a digital library and when, and who could be using the digital library if they are “surfing” a particular part of the Web; in this case, any site that is linked to or colinked with the NeLH. This study found that the NeLH was embedded in a multifaceted Web context, including many governmental, educational, commercial and organisational sites, the most interesting being sites from the .edu domain, representing American universities. Few links to the NeLH were followed on September 25, 2005 (the date of the log file analysis and link extraction analysis), suggesting that users reach the digital library via only a few select links, bookmarks, search engine searches, or non-electronic sources.
A number of studies of digital library users have been carried out using log file analysis as a research tool. Log files capture actual user transactions, while LexiURL can be used to extract the links and colinks that make up a digital library's growing Web network. This Web network is often overlooked, yet it can usefully indicate where potential users are surfing, even if they have not yet visited the NeLH site itself.