Search results

1 – 10 of over 7000
Article
Publication date: 1 October 2002

Jody Condit Fagan

Abstract

Server‐side include (SSI) codes allow Webmasters to insert content into their Web pages on‐the‐fly without programming knowledge. Using these codes effectively can mean a dramatic decrease in the time spent maintaining a large or medium‐sized Web site. Most Web servers provide some degree of server‐side functionality; a few allow great flexibility with if‐then statements and the ability to set variables. This article describes the functionality of SSI, how to enable the codes on a Web server, and a step‐by‐step process for implementing them. Examples of their use on a large academic library’s Web site are included for illustration.
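
To make the mechanism concrete, the following minimal Python sketch (not from the article) emulates how a server might resolve Apache-style include directives before a page is sent; the document root and file names are assumptions.

```python
import re
from pathlib import Path

# Matches Apache-style SSI directives such as:
#   <!--#include virtual="/includes/header.html" -->
INCLUDE_RE = re.compile(r'<!--#include\s+virtual="([^"]+)"\s*-->')

def resolve_includes(page_html: str, doc_root: Path) -> str:
    """Replace each SSI include directive with the referenced file's content."""
    def substitute(match):
        fragment = doc_root / match.group(1).lstrip("/")
        return fragment.read_text(encoding="utf-8")
    return INCLUDE_RE.sub(substitute, page_html)

# Hypothetical usage: every page pulls in one shared header fragment, so a
# site-wide change means editing a single file rather than every page.
root = Path("/var/www/html")                      # assumed document root
page = '<html><!--#include virtual="/includes/header.html" --></html>'
print(resolve_includes(page, root))
```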

Details

The Electronic Library, vol. 20 no. 5
Type: Research Article
ISSN: 0264-0473

Article
Publication date: 1 October 2006

Gi Woong Yun, Jay Ford, Robert P. Hawkins, Suzanne Pingree, Fiona McTavish, David Gustafson and Haile Berhe

Abstract

Purpose

This paper seeks to discuss measurement units by comparing internet use with traditional media use, and to understand internet use from the perspective of traditional media use.

Design/methodology/approach

The benefits and shortcomings of two log file types are carefully examined. Client‐side and server‐side log files are analyzed and compared with proposed units of analysis.

Findings

Server‐side session time calculation proved remarkably reliable and valid, based on its high correlation with the client‐side time calculation. The analysis revealed that server‐side log file session time measurement is more promising than researchers had previously speculated.
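
For readers unfamiliar with the server-side measurement being validated here, the following sketch shows one conventional way to derive session times from server log timestamps; the 30-minute inactivity cutoff and the log format are assumptions, not details taken from the paper.

```python
from datetime import datetime, timedelta

# Hypothetical page requests by one user, taken from a server log (sorted).
hits = [
    datetime(2006, 5, 1, 9, 0),
    datetime(2006, 5, 1, 9, 12),
    datetime(2006, 5, 1, 11, 0),   # long gap: counted as a new session
]

TIMEOUT = timedelta(minutes=30)    # conventional inactivity cutoff (assumed)

def session_times(timestamps):
    """Split hits into sessions at long gaps; return each session's duration."""
    sessions, start, last = [], None, None
    for ts in timestamps:
        if last is None or ts - last > TIMEOUT:
            if start is not None:
                sessions.append(last - start)
            start = ts
        last = ts
    if start is not None:
        sessions.append(last - start)
    return sessions

print(session_times(hits))   # sessions of 12 minutes and 0 minutes
```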

Practical implications

The ability to identify each individual user and a low incidence of caching problems were strong advantages for the analysis. These web design implementations and the web log data analysis scheme are recommended for future web log analysis research.

Originality/value

This paper examined the validity of client‐side and server‐side web log data. Triangulating the two datasets made it possible to recommend research designs and to propose analysis schemes.

Details

Internet Research, vol. 16 no. 5
Type: Research Article
ISSN: 1066-2243

Article
Publication date: 1 February 2016

Mhamed Zineddine

Abstract

Purpose

The purpose of this paper is to decrease the traffic created by search engines’ crawlers and solve the deep web problem using an innovative approach.

Design/methodology/approach

A new algorithm was formulated, based on the best existing algorithms, to reduce the traffic caused by web crawlers, which accounts for approximately 40 percent of all network traffic. The crux of this approach is that web servers monitor and log changes and communicate them as an XML file to search engines. The XML file includes the information necessary to generate refreshed pages from existing ones and to reference new pages that need to be crawled. Furthermore, the XML file is compressed to decrease its size to the minimum required.
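
To picture the approach, here is a minimal Python sketch of the change-notification idea; the element names, attributes, and file name are illustrative assumptions, not the schema proposed in the paper.

```python
import gzip
import xml.etree.ElementTree as ET

# Hypothetical change records a web server could accumulate between crawls;
# element and attribute names are illustrative, not the paper's schema.
changes = [
    {"url": "/news/today.html", "action": "modified"},
    {"url": "/reports/q1.html", "action": "new"},
]

root = ET.Element("changes", domain="example.org")
for c in changes:
    ET.SubElement(root, "page", url=c["url"], action=c["action"])

# Compress the XML so the file shipped to search engines stays small.
with gzip.open("changes.xml.gz", "wb") as fh:
    fh.write(ET.tostring(root, encoding="utf-8"))
```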

Findings

The results of this study show that the traffic caused by search engines’ crawlers might be reduced on average by 84 percent for text content. However, binary content poses many challenges, and new algorithms will have to be developed to overcome them. The proposed approach should also mitigate the deep web issue. The XML files for each domain used by search engines might additionally be used by web browsers to refresh their caches, helping to reduce the traffic generated by ordinary users, which lowers perceived latency and improves response time to HTTP requests.

Research limitations/implications

The study sheds light on the deficiencies and weaknesses of the algorithms that monitor changes and generate binary files. However, a substantial decrease in traffic is achieved for text-based web content.

Practical implications

The findings of this research can be adopted by web server software and browsers’ developers and search engine companies to reduce the internet traffic caused by crawlers and cut costs.

Originality/value

The exponential growth of web content and of other internet-based services, such as cloud computing and social networks, has been causing contention for the internet’s available bandwidth. This research provides a much-needed approach to keeping traffic in check.

Details

Internet Research, vol. 26 no. 1
Type: Research Article
ISSN: 1066-2243

Article
Publication date: 4 April 2008

C.I. Ezeife, Jingyu Dong and A.K. Aggarwal

Abstract

Purpose

The purpose of this paper is to propose a web intrusion detection system (IDS), SensorWebIDS, which applies data mining with anomaly and misuse intrusion detection to the web environment.

Design/methodology/approach

SensorWebIDS has three main components: the network sensor for extracting parameters from real‐time network traffic, the log digger for extracting parameters from web log files, and the audit engine for analyzing all web request parameters for intrusion detection. To combat web intrusions such as buffer‐overflow attacks, SensorWebIDS utilizes an algorithm based on the empirical rule of standard deviation (σ) theory, that 99.7 percent of data lie within 3σ of the mean, to calculate the maximum possible value length of input parameters. An association rule mining technique is employed to mine frequent parameter lists and their sequential order to identify intrusions.
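
As a concrete illustration (a minimal sketch, not the authors’ implementation), the length bound described above can be learned from clean traffic and applied to incoming parameter values as follows; the training lengths are invented.

```python
from statistics import mean, stdev

# Hypothetical lengths of one request parameter seen in clean training traffic.
training_lengths = [8, 10, 9, 12, 11, 10, 9, 13, 10, 11]

# Empirical rule: about 99.7 percent of normally distributed values lie within
# three standard deviations of the mean, so longer inputs are suspicious.
max_len = mean(training_lengths) + 3 * stdev(training_lengths)

def is_suspicious(value: str) -> bool:
    """Flag inputs whose length exceeds the learned bound (possible overflow)."""
    return len(value) > max_len

print(is_suspicious("alice"))       # False: ordinary input
print(is_suspicious("A" * 500))     # True: buffer-overflow-sized payload
```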

Findings

Experiments show that the proposed system has a higher detection rate for web intrusions than SNORT and ModSecurity across classes of intrusions such as cross‐site scripting, SQL injection, session hijacking, cookie poisoning, denial of service, buffer overflow, and probe attacks.

Research limitations/implications

Future work may extend the system to detect intrusions implanted with hacking tools rather than through straight HTTP requests, or embedded in non‐basic resources such as multimedia files; to track illegal web users through their prior web‐access sequences; to implement minimum and maximum values for integer data; and to automate the pre‐processing of training data so that it is clean and free of intrusions, for accurate detection results.

Practical implications

Web service security, as a branch of network security, is becoming more important as more business and social activities move online.

Originality/value

Existing network IDSs are not directly applicable to web intrusion detection because they mostly operate at the lower (network/transport) layers of the network model, while web services run at the higher (application) layer. The proposed SensorWebIDS detects XSS and SQL injection attacks through signatures, while other types of attacks are detected using association rule mining and statistics to compute frequent parameter list orders and their maximum value lengths.

Details

International Journal of Web Information Systems, vol. 4 no. 1
Type: Research Article
ISSN: 1744-0084

Article
Publication date: 1 January 1991

Howard Falk

Abstract

LANs (Local Area Networks) offer a flexible way to interconnect personal computers with each other, and with shared resources such as large central disk files, printers, and larger computers.

Details

The Electronic Library, vol. 9 no. 1
Type: Research Article
ISSN: 0264-0473

Article
Publication date: 13 September 2022

Haixiao Dai, Phong Lam Nguyen and Cat Kutay

Abstract

Purpose

Digital learning systems are crucial for education, and the data they collect can be analysed to understand students’ learning performance and improve support. The purpose of this study is to design and build an asynchronous hardware and software system that can store data on a local device until it is able to share them. It was developed for staff and students at a university who rely on the limited internet access available in areas such as the remote Northern Territory. The system can asynchronously link users’ devices and the central server at the university over an unstable internet connection.

Design/methodology/approach

A Learning Box was built based on a minicomputer and a web learning management system (LMS). This study presents different options for creating such a system and discusses various approaches to data syncing. The final setup is a Moodle (Modular Object Oriented Developmental Learning Environment) LMS on a Raspberry Pi, which provides a Wi-Fi hotspot. The authors worked with lecturers from X University who work in remote Northern Territory regions to test this and provide feedback. This study also considered suitable data collection and techniques that can be used to analyse the available data to support learning analysis by the staff.

Findings

The resulting system has been tested in various scenarios to ensure it is robust when students’ submissions are collected. Furthermore, issues around students’ familiarity and ability to use online systems have been considered in response to early feedback.

Research limitations/implications

Monitoring asynchronous collaborative learning systems through analytics can assist students learning in their own time. Learning Hubs can be easily set up and maintained using now widely available microcomputers. A phone interface is sufficient for learning when video and audio submissions are supported in the LMS.

Practical implications

This study shows digital learning can be implemented in an offline environment by using a Raspberry Pi as the LMS server. Offline collaborative learning in remote communities can be achieved by applying asynchronous data syncing techniques, and asynchronous data syncing can be reliably achieved using change logs and an incremental syncing technique, as sketched below.
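
A minimal sketch of the change-log idea mentioned above (assumed file names and record layout, not the authors’ code): each device appends events to a local log and, when connectivity returns, uploads only the entries past the last acknowledged position.

```python
import json
from pathlib import Path

LOG = Path("changelog.jsonl")     # local append-only change log (assumed name)
STATE = Path("sync_state.json")   # how many entries the server has acknowledged

def record_event(event: dict) -> None:
    """Append a learning event locally; needs no connectivity at all."""
    with LOG.open("a", encoding="utf-8") as fh:
        fh.write(json.dumps(event) + "\n")

def pending_entries() -> list:
    """Return only the entries written since the last successful sync."""
    synced = json.loads(STATE.read_text())["count"] if STATE.exists() else 0
    lines = LOG.read_text(encoding="utf-8").splitlines()
    return [json.loads(line) for line in lines[synced:]]

def mark_synced() -> None:
    """Record the acknowledged position so the next sync is incremental."""
    count = len(LOG.read_text(encoding="utf-8").splitlines())
    STATE.write_text(json.dumps({"count": count}))

record_event({"student": "s01", "quiz": "q1", "score": 7})
print(pending_entries())   # the unsent delta, uploaded when a link appears
```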

Social implications

Focus on audio and video submission allows engagement in higher education by students with lower literacy but stronger practical skills. Curricula that clearly support the level of learning required for a job need to be developed, and the assumption that literacy is part of a skilled job in the workplace needs to be removed.

Originality/value

To the best of the authors’ knowledge, this is the first remote asynchronous collaborative LMS environment that has been implemented. It provides the hardware and software needed to share learning remotely. Material to support low-literacy students is also included.

Details

Interactive Technology and Smart Education, vol. 21 no. 1
Type: Research Article
ISSN: 1741-5659

Article
Publication date: 1 December 2005

Yan Han

Abstract

Purpose

To recommend an integrated server platform and demonstrate a real‐world example that provides high availability (HA) and better data management to meet various systems' computing needs.

Design/methodology/approach

The paper overviews the theoretical background and real‐world implementations of HA and Storage Area Networks (SANs). A systems analysis process is described, and a platform was built integrating HA and a SAN for an academic library’s critical web server and its content management system. Recommendations for selection and implementation are offered for those adopting this approach.
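
As a rough illustration of the HA behaviour such a platform provides (a toy probe, not how clustering software is actually configured), the sketch below checks cluster nodes in order and falls back to a standby when the primary is unreachable; the host names are invented.

```python
import socket

# Hypothetical cluster nodes behind the library's web service.
NODES = [("primary.lib.example.edu", 80), ("standby.lib.example.edu", 80)]

def first_healthy(nodes, timeout=2.0):
    """Return the first node accepting a TCP connection (a crude health probe)."""
    for host, port in nodes:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return host, port
        except OSError:
            continue
    return None

# Clustering software runs a check like this continuously and moves the
# service address to the standby node when the primary stops answering.
print(first_healthy(NODES))
```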

Findings

The integrated platform is a generic approach to providing HA and better data management for servers, consisting of Linux/Windows clustering technologies and SANs. Its theoretical background and real‐world implementations proved its ability to meet the library’s various computing needs.

Practical implications

The integrated platform provides HA for various computing services such as web servers, file servers, databases, and DNS. A systems analysis is recommended for using this platform.

Originality/value

The paper suggests an approach to achieving better server management, giving IT managers an opportunity to meet users’ rising expectations.

Details

The Electronic Library, vol. 23 no. 6
Type: Research Article
ISSN: 0264-0473

Article
Publication date: 1 April 1990

Clifford A. Lynch

Abstract

The nature of information retrieval applications, the Z39.50 protocol, and its relationship to other OSI protocols are described. Through Z39.50 a client system views a remote server's database as an information resource, not merely a collection of data. Z39.50 allows a client to build queries in terms of logical information elements supported by the server. It also provides a framework for transmitting queries, managing results, and controlling resources. Sidebars describe the Z39.50 Implementors Group, the Z39.50 Maintenance Agency, and international standards for OSI library application protocols.
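
To make the query-and-result-set flow concrete, here is a schematic Python sketch. The classes and the in-memory server are inventions standing in for real Z39.50 messages, which are encoded in ASN.1/BER; the Bib-1 attribute in the example query (1=4, a title search) is standard.

```python
from dataclasses import dataclass

@dataclass
class SearchRequest:
    result_set: str   # server keeps results under this name for later Present
    query: str        # query over logical access points, not raw table columns

class FakeServer:
    """In-memory stand-in for a remote Z39.50 target (illustration only)."""
    def __init__(self, records):
        self.records, self.sets = records, {}
    def init(self):
        return True  # a real Init negotiates protocol version and options
    def search(self, req: SearchRequest) -> int:
        hits = [r for r in self.records if req.query.split()[-1] in r.lower()]
        self.sets[req.result_set] = hits
        return len(hits)          # hit count, as in a Z39.50 SearchResponse
    def present(self, result_set: str, start: int, count: int):
        return self.sets[result_set][start - 1:start - 1 + count]

server = FakeServer(["Networking for Libraries", "Cataloguing Basics"])
server.init()
hits = server.search(SearchRequest("default", "@attr 1=4 networking"))
print(hits, server.present("default", 1, 10))
```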

Details

Library Hi Tech, vol. 8 no. 4
Type: Research Article
ISSN: 0737-8831

Article
Publication date: 1 January 1998

Wei Ma

Abstract

This paper discusses the complementary nature of two media, Web access and CD networks, with emphasis on three points: (1) the importance and feasibility of combining Web access and CD networks; (2) the benefits and necessity of considering the community network environment as a whole, rather than focusing on the particular library; (3) the need for flexibility in considering new technologies. There is no single model for every library to implement. As well as introducing a few new products, the paper describes the experience of the Occidental College Library (Los Angeles, California, USA) to illustrate the possibility of building such a network; the possibility of sharing a network and network file server; and the division of workload between the Computer Centre and the library, and between the student worker and the library CD network administrator.

Details

The Electronic Library, vol. 16 no. 1
Type: Research Article
ISSN: 0264-0473

Article
Publication date: 29 June 2010

Elhadi Shakshuki and Abdur Rafey Matin

Abstract

Purpose

Intelligent agents are becoming an essential part of collaborative virtual environments. The purpose of this paper is to present an architecture of a learning agent that is able to utilize machine learning techniques to monitor the user's actions.

Design/methodology/approach

A learning agent is developed and integrated into a federated collaborative virtual workspace.

Findings

The experimental results showed that combining genetic algorithms with reinforcement learning gives the agent better learning capability, resulting in better predictions for the user.
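
The reported combination can be pictured with a toy example (an assumed setup; the abstract does not specify the authors’ scheme at code level): a genetic algorithm evolves the learning and exploration rates of a simple Q-learning loop, keeping whichever settings collect the most reward.

```python
import random

random.seed(0)

# Toy task: two actions where action 1 pays off more often. Q-learning learns
# this, and a genetic algorithm evolves the learning rate (alpha) and the
# exploration rate (epsilon) it uses.

def run_q_learning(alpha, epsilon, steps=200):
    q = [0.0, 0.0]
    total = 0.0
    for _ in range(steps):
        a = random.randrange(2) if random.random() < epsilon else q.index(max(q))
        p = 0.8 if a == 1 else 0.3          # action 1 is the better choice
        reward = 1.0 if random.random() < p else 0.0
        q[a] += alpha * (reward - q[a])     # one-step Q-update (stateless task)
        total += reward
    return total                            # fitness: total reward collected

def evolve(generations=10, pop_size=8):
    pop = [(random.uniform(0.01, 1.0), random.uniform(0.01, 0.5))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda g: run_q_learning(*g), reverse=True)
        parents = pop[: pop_size // 2]
        # Mutate the best settings to form the next generation.
        pop = parents + [(min(1.0, max(0.01, a + random.gauss(0, 0.1))),
                          min(0.5, max(0.01, e + random.gauss(0, 0.05))))
                         for a, e in parents]
    return max(pop, key=lambda g: run_q_learning(*g))

print("best (alpha, epsilon):", evolve())
```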

Originality/value

This paper provides experimental results and a performance analysis in terms of accuracy of predictions, processing time, and memory utilization of the agent.

Details

International Journal of Pervasive Computing and Communications, vol. 6 no. 2
Type: Research Article
ISSN: 1742-7371
