Search results
1–10 of over 1,000

Patrick OBrien, Kenning Arlitsch, Jeff Mixter, Jonathan Wheeler and Leila Belle Sterman
Abstract
Purpose
The purpose of this paper is to present data that begin to detail the deficiencies of log file analytics reporting methods that are commonly built into institutional repository (IR) platforms. The authors propose a new method for collecting and reporting IR item download metrics. This paper introduces a web service prototype that captures activity that current analytics methods are likely to either miss or over-report.
Design/methodology/approach
Data were extracted from DSpace Solr logs of an IR and were cross-referenced with Google Analytics and Google Search Console data to directly compare Citable Content Downloads recorded by each method.
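The cross-referencing step can be sketched in a few lines. This is a minimal illustration, not the authors' actual pipeline: the item handles, the counts and the function name are invented, and a real comparison would key on item identifiers extracted from the DSpace Solr logs and from the Google Analytics export.

```python
# Hypothetical sketch: cross-referencing per-item download counts from two
# sources, in the spirit of comparing DSpace Solr log counts with Google
# Analytics counts. All handles and numbers below are invented.

def compare_download_counts(solr_counts, ga_counts):
    """Return per-item counts from both sources plus their ratio.

    solr_counts / ga_counts: dicts mapping item handle -> download count.
    Items missing from a source are treated as zero downloads there.
    """
    items = set(solr_counts) | set(ga_counts)
    report = {}
    for item in sorted(items):
        solr = solr_counts.get(item, 0)
        ga = ga_counts.get(item, 0)
        report[item] = {"solr": solr, "ga": ga,
                        "ratio": solr / ga if ga else None}
    return report

# Invented example: log files often over-report because of robot traffic.
solr = {"1234/1": 900, "1234/2": 50}
ga = {"1234/1": 120, "1234/2": 45}
print(compare_download_counts(solr, ga))
```

A large Solr-to-GA ratio for an item would flag the kind of robot-driven over-reporting the study describes.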
Findings
This study provides evidence that log file analytics data appear to grossly over-report due to traffic from robots that are difficult to identify and screen. The study also introduces a proof-of-concept prototype that makes the research method easily accessible to IR managers who seek accurate counts of Citable Content Downloads.
Research limitations/implications
The method described in this paper does not account for direct access to Citable Content Downloads that originate outside Google Search properties.
Originality/value
This paper proposes that IR managers adopt a new reporting framework that classifies IR page views and download activity into three categories that communicate metrics about user activity related to the research process. It also proposes that IR managers rely on a hybrid of existing Google Services to improve reporting of Citable Content Downloads and offers a prototype web service where IR managers can test results for their repositories.
Yu-Jung Cheng and Kuang-Hua Chen
Abstract
Purpose
The present study aims to clarify two research objectives: (1) the user behavior on government websites during the coronavirus disease (COVID-19) period and (2) how the government improved its website design during the COVID-19 period.
Design/methodology/approach
The authors used website analytics to examine usage patterns and behaviors of the government website via personal computer (PC) and mobile devices during the COVID-19 pandemic. In-depth interviews were conducted to understand the user experience of government website users and to gather users' opinions about how government websites should be redesigned.
Findings
With the rise of the COVID-19 pandemic, most studies expected that the use of government websites through mobile devices would grow astonishingly. The authors found that the COVID-19 pandemic did not increase the use of government websites. Instead, severe declines in website usage were observed for all device users, with the declines being more pronounced among mobile device users than among PC users. This is a cautionary finding: public health and pandemic-prevention information announced on government websites cannot be effectively transmitted to the general public through official online platforms.
Originality/value
The study highlights the gap in information behavior and usage patterns between PC and mobile device users when visiting government websites. Although mobile devices brought many new visitors, they are ineffective at retaining visitors and fostering continuous long-term use. The results of this localized experience can help improve government website evaluation worldwide.
Joy M. Perrin, Le Yang, Shelley Barba and Heidi Winkler
Abstract
Purpose
Digital collection assessment has focused mainly on evaluating systems, metadata and usability. While use evaluation is discussed in the literature, there are no standard criteria and methods for how to perform assessment on use effectively. This paper asserts that use statistics have complexities that prohibit meaningful interpretation and assessment. The authors aim to discover the problems inherent in the assessment of digital collection use statistics and propose solutions to address such issues.
Design/methodology/approach
This paper identifies and demonstrates five inherent problems with use statistics that need to be addressed when assessing digital collections using the statistics produced by assessment tools on local digital repositories. The authors then propose solutions to resolve the problems that present themselves upon such analysis.
Findings
The authors identified five problems with digital collection use statistics. Problem one is the difficulty of distinguishing different kinds of internet traffic. Problem two is the lack of a direct correlation between a digital item and its multiple URLs, so statistics from external web analytics tools are not ideal. Problem three is the analytics tools’ inherent bias toward statistics that are counted only in a positive direction. Problem four is the inconsistent interaction between digital collections and search engine indexing. Problem five is the evaluator’s bias toward interpreting simple statistical growth over time as a positive use assessment. Because of these problems, statistics on digital collections do not properly measure a digital library’s value.
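Problem one above can be made concrete with a small sketch. This is an invented, deliberately naive user-agent heuristic, not a method from the paper; its weakness (a robot that does not self-identify slips through) is exactly why distinguishing traffic types is hard.

```python
# Naive robot screening via user-agent substrings. All log entries are
# invented. A "stealth" robot with a generic user agent evades the filter,
# illustrating why raw use statistics conflate traffic types.

BOT_MARKERS = ("bot", "crawler", "spider", "slurp")

def is_probable_robot(user_agent):
    """True if the user-agent string self-identifies as a robot."""
    ua = user_agent.lower()
    return any(marker in ua for marker in BOT_MARKERS)

def human_view_count(log_entries):
    """Count entries whose user agent does not self-identify as a robot."""
    return sum(1 for ua in log_entries if not is_probable_robot(ua))

hits = [
    "Mozilla/5.0 (Windows NT 10.0)",        # looks human
    "Googlebot/2.1 (+http://www.google.com/bot.html)",
    "Mozilla/5.0 (stealth harvester)",      # robot that evades the heuristic
]
print(human_view_count(hits))  # counts 2: the stealth robot is counted as human
```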
Practical implications
Findings highlight problems with current use measures and offer improvements.
Originality/value
This paper identifies five problems that need to be addressed before a meaningful assessment of digital collection use statistics can take place. The paper ends with a call for evaluators to try to solve or mitigate the stated problems for their digital collections in their own evaluations.
Kathryn E. Eccles, Mike Thelwall and Eric T. Meyer
Abstract
Purpose
Webometric studies, using hyperlinks between websites as the basic data type, have been used to assess academic networks and the “impact factor” of academic communications, and to analyse the impact of online digital libraries and of digital scholarly images. This study aims to be the first to use these methods to trace the impact, or success, of digitised scholarly resources in the Humanities. Running alongside a number of other methods of measuring impact online, the webometric study described here also aims to assess whether it is possible to measure a resource's impact using webometric analysis.
Design/methodology/approach
Link data were collected for five target project sites and a range of comparator sites.
Findings
The results show that digitised resources online can leave traces that can be identified and used to assess their impact. However, where digitised resources are situated at shifting URLs or amalgamated into larger online resources, their impact is difficult to measure with these methods.
Originality/value
This study is the first to use webometric methods to probe the impact of digitised scholarly resources in the Humanities.
Alireza Amrollahi and Bruce Rowlands
Abstract
Purpose
The purpose of this paper is to show how collaborative information technology (IT) tools and a crowdsourcing model can be leveraged for the purpose of strategic planning. To achieve this objective, a formal method of open strategic planning (OSP) is proposed.
Design/methodology/approach
Based on a review of the literature, a set of activities, stakeholders and governing rules is identified in the form of an OSP method. The proposed planning method is implemented in a case study of strategic planning at an Australian university. Observations by the research team and archival records were used to ascertain the relevance of the method used.
Findings
A method for OSP is presented and assessed. The method contains four phases: pre-planning, idea submission, idea refinement, and plan development. These phases cover the activities required from conceptualization to preparing and publishing the strategic plan. The findings clarify how the principles of OSP helped the organization to include more stakeholders and provided the opportunity to make the planning process transparent through use of a collaborative IT tool.
Practical implications
The study provides managers and planning consultants with detailed guidelines to implement the concept of open strategy.
Originality/value
This study is among the few to propose a method for OSP based on empirical research. The study also shows how collaborative IT tools can be used for high-level organizational tasks such as strategic planning.
Abstract
The use of technology in Saudi Arabian higher education is constantly evolving. With the support of the 2030 Saudi vision, many research studies have started covering learning analytics and Big Data in Saudi Arabian higher education. Examining learning analytics in higher education institutions promises to transform the learning experience and maximize students' learning potential. With thousands of student transactions recorded in the various learning management systems (LMS) of Saudi educational institutions, the need to explore and research learning analytics in Saudi Arabia has caught the interest of scholars and researchers regionally and internationally. This chapter explores a private university in Jeddah, Saudi Arabia, examining its rich learning analytics to discover the knowledge behind them. More than 300,000 records of LMS analytical data were collected, spanning four consecutive years. The educational data mining process of Romero, Ventura, and Garcia (2008) was applied to collect and analyze the analytical reports. Statistical and trend analyses were applied to examine and interpret the collected data. The study also collected lecturers' testimonies to support the analytical data. The study revealed a transformative pedagogy that impacts course instructional design and students' engagement.
Kazuo Nakatani and Ta‐Tao Chuang
Abstract
Purpose
The purpose of this paper is to develop an analytical hierarchy process (AHP)‐based selection model for choosing a web analytics product/service that meets organizational needs.
Design/methodology/approach
The research objective is achieved through modeling and empirical validation.
Findings
While more criteria could be added, the proposed selection model provides a feasible approach to choosing a web analytics product/service. Cost‐ and risk‐related criteria are weighed heavier than those of technical capabilities. Tools based on the page tagging method are more popular than those based on transaction log file analysis. The level of technology savvy might play a role in the application of the selection model.
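The AHP machinery behind such a selection model can be sketched briefly. The criteria names and pairwise judgments below are invented for illustration and do not reproduce the paper's criteria list or weights; the computation uses the standard row geometric-mean approximation of AHP priority weights.

```python
# Sketch of AHP weight derivation from a pairwise comparison matrix,
# using the geometric-mean approximation. Judgments are invented.
import math

def ahp_weights(matrix):
    """Approximate AHP priority weights via normalized row geometric means."""
    gms = [math.prod(row) ** (1.0 / len(row)) for row in matrix]
    total = sum(gms)
    return [g / total for g in gms]

# Invented judgments for three criteria: cost, risk, technical capability.
# matrix[i][j] = how much more important criterion i is than criterion j.
pairwise = [
    [1.0, 2.0, 4.0],     # cost
    [0.5, 1.0, 3.0],     # risk
    [0.25, 1 / 3, 1.0],  # technical capability
]
weights = ahp_weights(pairwise)
print([round(w, 3) for w in weights])  # cost and risk outweigh capability
```

The resulting ranking (cost- and risk-related criteria weighted heavier than technical capability) mirrors the pattern the findings describe.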
Research limitations/implications
The development of web analytics products/services is still evolving. Thus, as the use of web analytics increases, more criteria might be identified and added to the model. The model was validated by groups from different sectors. In the future, it is suggested that a similar study be conducted within one sector by different groups.
Practical implications
The selection model provides a process in which practitioners can systematically evaluate pros and cons of web analytics products/services. The selection model includes a comprehensive list of criteria that vendors of web analytics products/services can use to benchmark their products. Following this model, an organization contemplating the use of web analytics will more likely find one product/service that accommodates organizational and technological characteristics.
Originality/value
A sufficiently comprehensive list of qualitative and quantitative criteria for evaluating web analytics products/services was developed. Practitioners will be able to use the model to select a proper tool. In academia, the article fills a gap in the literature that may attract academics' interest in this area.
Güzin Özdağoğlu, Gülin Zeynep Öztaş and Mehmet Çağliyangil
Abstract
Purpose
Learning management systems (LMS) provide detailed information about the processes through event-logs. Process and related data-mining approaches can reveal valuable information from these files to help teachers and executives to monitor and manage their online learning processes. In this regard, the purpose of this paper is to present an overview of the current direction of the literature on educational data mining, and an application framework to analyze the educational data provided by the Moodle LMS.
Design/methodology/approach
The paper presents a framework to provide a decision support through the approaches existing in process and data-mining fields for analyzing the event-log data gathered from LMS platforms. In this framework, latent class analysis (LCA) and sequential pattern mining approaches were used to understand the general patterns; heuristic and fuzzy approaches were performed for process mining to obtain the workflows and statistics; finally, social-network analysis was conducted to discover the collaborations.
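One building block of such process-mining workflows can be sketched concretely: deriving a directly-follows relation from event-log data, the starting point for heuristic miners. The event names and case IDs below are invented, not taken from the study's Moodle data.

```python
# Sketch: directly-follows counts from an LMS-style event log.
# event_log is a list of (case_id, activity) tuples ordered by timestamp.
from collections import Counter, defaultdict

def directly_follows(event_log):
    """Count how often activity a is immediately followed by b within a case."""
    traces = defaultdict(list)
    for case_id, activity in event_log:
        traces[case_id].append(activity)
    edges = Counter()
    for activities in traces.values():
        for a, b in zip(activities, activities[1:]):
            edges[(a, b)] += 1
    return edges

log = [
    ("s1", "login"), ("s1", "view_course"), ("s1", "submit_quiz"),
    ("s2", "login"), ("s2", "view_course"), ("s2", "view_forum"),
]
print(directly_follows(log)[("login", "view_course")])  # → 2
```

Heuristic process-mining algorithms build their workflow models from exactly these edge frequencies, pruning rare edges as noise.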
Findings
The analyses conducted in the study give clues about the process performance of the course during a semester by indicating exceptional situations, clarifying the activity flows, understanding the main process flow and revealing the students’ interactions. Findings also show that preliminary data analyses before the process mining steps are beneficial for understanding the general pattern and exposing irregular ones.
Originality/value
The study highlights the benefits of analyzing event-log files of LMSs to improve the quality of online educational processes through a case study based on Moodle event-logs. The application framework covers preliminary analyses such as LCA before the use of process mining algorithms to reveal the exceptional situations.
Rachel K. Fischer, Aubrey Iglesias, Alice L. Daugherty and Zhehan Jiang
Abstract
Purpose
The article presents a methodology that can be used to analyze data from the transaction log of EBSCO Discovery Service searches recorded in Google Analytics. It explains the steps to follow for exporting the data, analyzing the data, and recreating searches. The article provides suggestions to improve the quality of research on the topic. It also includes advice to vendors on improving the quality of transaction log software.
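The search-recreation step can be sketched as parsing query parameters out of page paths exported from Google Analytics. The path structure and the `q` parameter name below are hypothetical EDS-style examples, not the exact format EBSCO Discovery Service records.

```python
# Hedged sketch: recovering search terms from page paths exported from
# Google Analytics. The URL layout and parameter name are assumptions.
from urllib.parse import urlparse, parse_qs

def extract_search_terms(page_paths):
    """Pull the query parameter out of recorded search-result page paths."""
    terms = []
    for path in page_paths:
        qs = parse_qs(urlparse(path).query)
        if "q" in qs:                    # assumed query parameter name
            terms.append(qs["q"][0])
    return terms

exported = [
    "/eds/results?q=information+literacy&type=all",
    "/eds/results?q=log+analysis",
    "/eds/detail/record123",             # a record view, not a search
]
print(extract_search_terms(exported))
```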
Design/methodology/approach
Case study
Findings
Although Google Analytics can be used to study transaction logs accurately, vendors still need to improve the functionality so librarians can gain the most benefit from it.
Research limitations/implications
The research is applicable to the usage of Google Analytics with EBSCO Discovery Service.
Practical implications
The steps presented in the article can be followed as a step-by-step guide to repeating the study at other institutions.
Social implications
The methodology in this article can be used to assess how library instruction can be improved.
Originality/value
This article provides a detailed description of a transaction log analysis process that other articles have not previously described. This includes a description of a methodology for accurately calculating statistics from Google Analytics data and provides steps for recreating accurate searches from data recorded in Google Analytics.
Abstract
Purpose
Libraries throughout the world use OCLC’s EZproxy software to manage access to e-resources. When cleaned, processed, visualized and enhanced, these logs paint a valuable picture of a library’s impact on researchers’ lives. The purpose of this paper is to share techniques and procedures for enhancing and de-identifying EZproxy logs using Tableau, a data analytics and visualization software, and Tableau Prep, a tool used for cleaning, combining and shaping data for analysis.
Design/methodology/approach
In February 2018, The Ohio State University Libraries established an automated daily process to extract and clean EZproxy log files. The assessment librarian created a series of procedures in Tableau and Tableau Prep to union, parse and enhance these files by adding information such as user major, user status (faculty, graduate or undergraduate) and the title of the requested resource. Finally, she stripped the data set of identifiers and applied best practices for maintaining confidentiality before visualizing the data.
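The de-identification step can be sketched in isolation. The log line, field layout and salt below are invented (EZproxy field order varies with local configuration); the idea is simply to replace the user identifier with a consistent one-way hash so records can still be joined without exposing who the user is.

```python
# Sketch: de-identifying a proxy log line by hashing the user field.
# The line format and field position are assumptions for illustration.
import hashlib

def deidentify(line, salt="local-secret"):   # salt value is illustrative
    """Replace the user field (3rd whitespace token) with a salted hash."""
    fields = line.split(" ")
    digest = hashlib.sha256((salt + fields[2]).encode()).hexdigest()[:12]
    fields[2] = digest
    return " ".join(fields)

raw = ('10.0.0.1 - jdoe42 [01/Feb/2018:09:15:00 -0500] '
       '"GET /login HTTP/1.1" 200 5120')
clean = deidentify(raw)
print("jdoe42" in clean)  # False: the identifier is gone
```

Because the hash is deterministic for a given salt, the same user maps to the same token across log files, which preserves the joins needed for status- and major-level analysis.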
Findings
The data set is currently 1.5m rows and growing. The visualizations may be filtered by date, user status and user department/major where applicable. Safeguards are in place to limit data presentation when filters might reveal a user’s identity.
Originality/value
Tableau used in concert with Tableau Prep allows an assessment librarian to clean and combine data from various sources. Once procedures for cleaning and combining data sources are established, the data driving visualizations can be set to refresh on a set schedule. This expedites librarians’ ability to derive actionable insights from EZproxy data and to share the library’s positive impact on researchers’ lives.