Search results
1 – 10 of over 11,000
Gi Woong Yun, Jay Ford, Robert P. Hawkins, Suzanne Pingree, Fiona McTavish, David Gustafson and Haile Berhe
Abstract
Purpose
This paper seeks to discuss measurement units by comparing internet use with traditional media use, and to understand internet use from the perspective of traditional media use.
Design/methodology/approach
Benefits and shortcomings of two log file types will be carefully and exhaustively examined. Client‐side and server‐side log files will be analyzed and compared with proposed units of analysis.
Findings
Server‐side session time calculation proved remarkably reliable and valid, based on its high correlation with the client‐side time calculation. The analysis revealed that server‐side log file session time measurement is more promising than researchers had previously speculated.
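The server‐side session time measure referred to here can be sketched roughly as follows. This is a generic illustration, not the paper's actual procedure: the 30‐minute timeout, the data layout and the assumption that each user is individually identifiable are all hypothetical.

```python
# Sketch: deriving per-user session time from server-side log timestamps.
# The timeout cutoff and input format are assumptions for illustration.
from datetime import datetime, timedelta

SESSION_TIMEOUT = timedelta(minutes=30)  # common heuristic gap cutoff

def session_times(requests):
    """requests: list of (user_id, timestamp) tuples parsed from a log.
    Returns {user_id: total active seconds summed across sessions}."""
    by_user = {}
    for user, ts in sorted(requests, key=lambda r: (r[0], r[1])):
        by_user.setdefault(user, []).append(ts)
    totals = {}
    for user, stamps in by_user.items():
        total = timedelta()
        start = prev = stamps[0]
        for ts in stamps[1:]:
            if ts - prev > SESSION_TIMEOUT:  # long gap => new session
                total += prev - start
                start = ts
            prev = ts
        total += prev - start  # close the final session
        totals[user] = total.total_seconds()
    return totals
```

A single isolated hit contributes zero seconds under this scheme, which is one reason server‐side and client‐side estimates can diverge.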
Practical implications
The ability to identify each individual user and the low incidence of caching problems were strong advantages for the analysis. These web design implementations and the web log data analysis scheme are recommended for future web log analysis research.
Originality/value
This paper examined the validity of client‐side and server‐side web log data. On the basis of the triangulation of the two datasets, research designs and proposed analysis schemes could be recommended.
Abstract
Most electronic journals are now Web‐based. This paper introduces the method of WWW server log file analysis and its application to evaluating electronic journal services and monitoring their usage. Following a short description of the method and its possible applications, the main results of a study of WWW server log file analysis of the electronic journal “Review of Information Science” are presented and discussed. Finally, several concluding remarks are given.
Abstract
The use of technology in Saudi Arabian higher education is constantly evolving. With the support of the Saudi Vision 2030, many research studies have begun to cover learning analytics and Big Data in Saudi Arabian higher education. Examining learning analytics in higher education institutions promises to transform the learning experience and maximize students' learning potential. With the thousands of student transactions recorded in the various learning management systems (LMS) of Saudi educational institutions, the need to explore and research learning analytics in Saudi Arabia has caught the interest of scholars and researchers regionally and internationally. This chapter explores a Saudi private university in Jeddah, Saudi Arabia, examines its rich learning analytics, and uncovers the knowledge behind them. More than 300,000 records of LMS analytical data were collected from four consecutive years of historical data. The Romero, Ventura and Garcia (2008) educational data mining process was applied to collect and analyze the analytical reports. Statistical and trend analysis was applied to examine and interpret the collected data. The study also collected lecturers' testimonies to support the collected analytical data. The study revealed a transformative pedagogy that impacts course instructional design and students' engagement.
Alesia Zuccala, Mike Thelwall, Charles Oppenheim and Rajveen Dhiensa
Abstract
Purpose
The purpose of this paper is to explore the use of LexiURL as a Web intelligence tool for collecting and analysing links to digital libraries, focusing specifically on the National electronic Library for Health (NeLH).
Design/methodology/approach
The Web intelligence techniques in this study are a combination of link analysis (web structure mining), web server log file analysis (web usage mining), and text analysis (web content mining), utilizing the power of commercial search engines and drawing upon the information science fields of bibliometrics and webometrics. LexiURL is a computer program designed to calculate summary statistics for lists of links or URLs. Its output is a series of standard reports, for example listing and counting all of the different domain names in the data.
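The kind of summary report described for LexiURL can be sketched in a few lines: given a list of inlink URLs, count how often each domain appears. This is an illustrative stand‐in, not LexiURL's implementation, and the URLs below are invented.

```python
# Sketch of a LexiURL-style summary report: counting the distinct
# domain names appearing in a list of link URLs. Example URLs invented.
from collections import Counter
from urllib.parse import urlparse

def domain_counts(urls):
    """Count how often each domain (netloc) occurs among the URLs."""
    return Counter(urlparse(u).netloc.lower() for u in urls)

links = [
    "http://www.example.edu/library/nelh.html",
    "http://www.example.edu/courses/health.html",
    "http://health.example.org/links",
]
print(domain_counts(links).most_common())
```

Ranking domains this way is what makes patterns such as a cluster of .edu inlinks visible at a glance.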
Findings
Link data, when analysed together with user transaction log files (i.e. Web referring domains), can provide insights into who is using a digital library and when, and who could be using the digital library if they are “surfing” a particular part of the Web; in this case, any site that is linked to or colinked with the NeLH. This study found that the NeLH was embedded in a multifaceted Web context, including many governmental, educational, commercial and organisational sites, the most interesting being sites from the .edu domain, representing American universities. Not many links directed to the NeLH were followed on September 25, 2005 (the date of the log file analysis and link extraction analysis), which means that users who access the digital library have been arriving at the site via only a few select links, bookmarks, search engine searches, or non‐electronic sources.
Originality/value
A number of studies concerning digital library users have been carried out using log file analysis as a research tool. Log files focus on real‐time user transactions, while LexiURL can be used to extract links and colinks associated with a digital library's growing Web network. This Web network is not recognized often enough, and can be a useful indication of where potential users are surfing, even if they have not yet specifically visited the NeLH site.
Abstract
As has been described elsewhere, web log files are a useful source of information about visitor site use, navigation behaviour and, to some extent, demographics. But log files can also reveal the existence of both web pages and search engine queries that are sources of new visitors. This study extracts such information from a single web log file and uses it to illustrate its value, not only to the site owner but also to those interested in investigating the online behaviour of web users.
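Extracting referring pages and search‐engine queries of the kind this study uses typically means parsing the referrer field of each log entry. The sketch below assumes Apache's Combined Log Format and the conventional `q` query parameter; real logs and engines vary.

```python
# Sketch: pulling the referring domain and any search-engine query out
# of one line of a web log in Combined Log Format (an assumption here).
import re
from urllib.parse import urlparse, parse_qs

# request line, status code, bytes, then the quoted referrer field
REFERRER_RE = re.compile(r'"[^"]*" \d{3} \S+ "(?P<ref>[^"]*)"')

def referrer_info(log_line):
    """Return (referring_domain, search_terms or None), or None if absent."""
    m = REFERRER_RE.search(log_line)
    if not m or m.group("ref") in ("-", ""):
        return None  # direct visit or no referrer logged
    ref = urlparse(m.group("ref"))
    terms = parse_qs(ref.query).get("q")  # 'q' is a common query key
    return ref.netloc, terms[0] if terms else None

line = ('1.2.3.4 - - [25/Sep/2005:10:00:00 +0000] "GET /page HTTP/1.1" '
        '200 1234 "http://www.google.com/search?q=web+log+analysis" "Mozilla"')
print(referrer_info(line))
```

Aggregating these pairs over a whole log file is what surfaces the pages and queries that bring in new visitors.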
Hamid R. Jamali, David Nicholas and Paul Huntington
Abstract
Purpose
To provide a review of the log analysis studies of use and users of scholarly electronic journals.
Design/methodology/approach
The advantages and limitations of log analysis are described, and past studies of e‐journals' use and users that applied this methodology are critiqued. The results of these studies are briefly compared with some survey studies. Those aspects of online journals' use and users that log analysis can investigate well, and those about which it cannot disclose enough information, are highlighted.
Findings
The review indicates that although there is a debate about reliability of the results of log analysis, this methodology has great potential for studying online journals' use and their users' information seeking behaviour.
Originality/value
This paper highlights the strengths and weaknesses of log analysis for studying digital journals and raises a couple of questions to be investigated by further studies.
Khaled A. Mohamed and Ahmed Hassan
Abstract
Purpose
This paper aims to examine the behaviour of the Egyptian scholars while accessing electronic resources through two federated search tools. The main purpose of this article is to provide guidance for federated search tool technicians and support teams about user issues, including the need for training.
Design/methodology/approach
Log files were exploited to examine the behaviour of users of information retrieval systems. This study examined two log files extracted from federated search tools available to the Egyptian scholars' community for accessing electronic resources. A data mining approach was implemented to investigate user behaviour through deep analysis of these logs.
Findings
Results show that: none of the available tools provides error messages for dummy queries; most of the Egyptian scholars submitted short queries; Boolean operators were not used in about 50 per cent of the queries; federated search tools do not provide techniques for query reformulation; the optimal days for system maintenance are non‐weekend vacations; and early morning is the best time for maintenance.
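The query‐log statistics behind findings like these (query length, Boolean‐operator usage) can be sketched as below. The query strings and the plain‐text uppercase operator convention are assumptions for illustration, not the study's actual data or parser.

```python
# Sketch: basic query-log statistics of the kind reported here --
# mean query length and share of queries using Boolean operators.
# Sample queries and operator syntax are hypothetical.
def query_stats(queries):
    booleans = {"AND", "OR", "NOT"}
    lengths = [len(q.split()) for q in queries]
    with_bool = sum(1 for q in queries
                    if booleans & set(q.upper().split()))
    return {
        "mean_terms": sum(lengths) / len(lengths),
        "boolean_share": with_bool / len(queries),
    }

sample = ["heart disease", "cancer AND treatment",
          "malaria", "flu OR influenza"]
print(query_stats(sample))
```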
Practical implications
The value of federated search tools can be maximised by understanding user trends in their utilisation. The study shows that more attention should be given to search capabilities, through ongoing training and awareness, in order to maximise the benefit from the available resources and tools.
Originality/value
The hypothetical value of the federated search tools has not been previously examined and analysed to understand user trends.
Abstract
Purpose
The purpose of this article is to alert researchers to software for web tracking of information seeking behaviour, and to offer a list of criteria that will make it easier to select software. A selection of research projects based on web tracking as well as the benefits and disadvantages of web tracking are also explored.
Design/methodology/approach
An overview of the literature, including clarification of key concepts, a brief overview of studies of web information seeking behaviour based on web tracking, identification of software used, as well as the strengths and short‐comings noted for web tracking is used as a background to the identification of criteria for the selection of web tracking software.
Findings
Web tracking can offer very valuable information for the development of websites, portals, digital libraries, etc. It, however, needs to be supplemented by qualitative studies, and researchers need to ensure that the tracking software will collect the data required.
Research limitations/implications
The criteria are not applied to any software in particular.
Practical implications
The criteria can be used by researchers working on web usage and web information seeking behaviour to select suitable tracking software.
Originality/value
Although there are many reports on the use of web tracking (also reported in this article), nothing could be traced on criteria for the evaluation of web tracking software.
Daniel Hofer, Markus Jäger, Aya Khaled Youssef Sayed Mohamed and Josef Küng
Abstract
Purpose
For aiding computer security experts in their work, log files are a crucial source of information. The time domain is especially important, because in most cases timestamps are the only links between events, caused by attackers, faulty systems or simple errors, and their corresponding entries in log files. With the idea of storing and analyzing this log information in graph databases, we need a suitable model to store and connect timestamps and their events. This paper aims to find and evaluate different approaches to storing timestamps in graph databases, together with their individual benefits and drawbacks.
Design/methodology/approach
We analyse three different approaches to how timestamp information can be represented and stored in graph databases. For checking the models, we set up four typical questions that are important for log file analysis and tested them against each of the models. During the evaluation, we used performance and other properties as metrics of how suitable each model is for representing the log files’ timestamp information. In the last part, we try to improve one promising‐looking model.
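The modelling choice being evaluated can be illustrated with a toy contrast: a timestamp stored as a plain property on the event node versus a timestamp decomposed into linked year/month/day nodes (a "time tree"). The paper's concrete models and database are not specified here, so this sketch uses plain Python dicts as stand‐in graph nodes.

```python
# Toy contrast of two timestamp models for log events in a graph.
# Dicts stand in for graph nodes; both models are generic illustrations.

# Model A: timestamp as a simple property of the event node -- the
# least graph-specific model.
event_a = {"label": "LogEvent", "msg": "login failed",
           "ts": "2021-03-01T09:15:00"}

# Model B: timestamp decomposed into linked calendar nodes, so events
# on the same day share a Day node.
day = {"label": "Day", "value": 1}
month = {"label": "Month", "value": 3, "children": [day]}
year = {"label": "Year", "value": 2021, "children": [month]}
event_b = {"label": "LogEvent", "msg": "login failed",
           "occurred_on": day}

# A typical log-analysis question -- "all events on a given day" -- is
# a single prefix comparison in Model A, but a multi-hop traversal
# (year -> month -> day -> events) in Model B.
def on_day_a(events, date_prefix):
    return [e for e in events if e["ts"].startswith(date_prefix)]

print(on_day_a([event_a], "2021-03-01"))
```

The one‐comparison query in Model A hints at why a model with fewer graph‐specific concepts can end up with the simplest and fastest queries.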
Findings
We come to the conclusion that the simplest model, with the fewest graph database‐specific concepts in use, is also the one yielding the simplest and fastest queries.
Research limitations/implications
Limitations of this research are that only one graph database was studied, and improvements to the query engine might change future results.
Originality/value
In the study, we addressed the issue of storing timestamps in graph databases in a meaningful, practical and efficient way. The results can be used as a pattern for similar scenarios and applications.
Ljubomir Paskali, Lidija Ivanovic and Dragan Ivanović
Abstract
Purpose
The purpose of this paper is to determine the digital library usage patterns as a means of improving the system, as well as the user experience, to give appropriate recognition to the most popular dissertations’ authors and to measure the interest of non-academic users for dissertations defended at the University of Novi Sad (UNS).
Design/methodology/approach
A logging module of the digital library of theses and dissertations of the University of Novi Sad (PHD UNS) application has been implemented. The module recorded messages relating to search queries and downloads over the three‐year period 2017–2019. These logs were analysed using the Elasticsearch, Logstash and Kibana (ELK) technology stack, and the results are shown using graphs and tables.
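The kind of aggregation the ELK stack performs here, e.g. to locate a low‐traffic weekly maintenance window, can be sketched as a histogram of events by weekday and hour. The log‐line format below is hypothetical, not PHD UNS's actual logging format.

```python
# Sketch: bucketing logged events by (weekday, hour) to find busy and
# quiet slots. Log-line format ('<ISO timestamp> <event>') is assumed.
from collections import Counter
from datetime import datetime

def traffic_histogram(log_lines):
    """Return a Counter keyed by (weekday name, hour of day)."""
    hist = Counter()
    for line in log_lines:
        stamp, _event = line.split(" ", 1)
        ts = datetime.fromisoformat(stamp)
        hist[(ts.strftime("%A"), ts.hour)] += 1
    return hist

logs = [
    "2018-05-14T10:15:00 download",
    "2018-05-14T10:40:00 search",
    "2018-05-20T03:05:00 download",
]
hist = traffic_histogram(logs)
print(hist.most_common(1))  # busiest slot; the quietest slots suggest
                            # candidate maintenance windows
```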
Findings
The analysis determined the optimal time for weekly maintenance of the system, produced recommendations for improving the system and revealed the most popular dissertations. A significant number of downloads and queries originated from citizens, i.e. users outside the academic community.
Practical implications
The conducted analysis defined recommendations for system improvement, which can be used by the PHD UNS research and development (R&D) team, and revealed the most popular dissertations, which are used for the promotion of their authors through faculties’ websites.
Originality/value
To the best of the authors’ knowledge, this is the first ELK‐based log analysis study of a Serbian‐language document repository. Besides the value of the results for the PHD UNS R&D team and the UNS rector’s team, the study shows that a PhD digital library is an important Open Science communication channel for presenting scientific results to citizens.