Search results

Article
Publication date: 6 June 2008

K. Belic and D. Surla

Abstract

Purpose

The purpose of this paper is the implementation of a software system for processing bibliographic material that does not require knowledge of any bibliographic data input format. This means that input can be done not only by librarians but also by other people, such as authors of bibliographic units, students, employees, etc.

Design/methodology/approach

An object-oriented methodology for developing information systems by means of computer-aided software engineering (CASE) tools and software components is used. The software architecture is multi-layered and web-based, and the implementation is done in a Java environment.
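As a rough illustration of such a layering (a minimal sketch only; the interface and class names below are assumptions, not taken from the paper), the presentation layer would call a business-logic interface that in turn hides the persistence layer:

    // Hypothetical three-layer separation; all names are illustrative, not the authors' design.
    import java.util.Map;
    import java.util.UUID;

    interface RecordStore {                        // persistence layer
        void save(String recordId, Map<String, String> fields);
    }

    interface CataloguingService {                 // business-logic layer
        String createRecord(Map<String, String> userInput);
    }

    class SimpleCataloguingService implements CataloguingService {
        private final RecordStore store;

        SimpleCataloguingService(RecordStore store) { this.store = store; }

        @Override
        public String createRecord(Map<String, String> userInput) {
            String id = UUID.randomUUID().toString();
            store.save(id, userInput);             // the web layer never touches the store directly
            return id;
        }
    }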

Findings

The result is a web application with which users can catalogue bibliographic material without being familiar with the underlying format. Nevertheless, the bibliographic record is formed in accordance with the given format for bibliographic material processing (MARC21, UNIMARC, etc.).

Research limitations/implications

Automatic generation of screen forms for the chosen set of data for bibliographic material processing is not provided in the presented version of the application. To eliminate this limitation, there are preset solutions that can be integrated into the application.

Practical implications

The application is primarily intended for research institutions aiming to build their electronic catalogues and/or bibliographies of researchers or institutions.

Originality/value

The originality of the paper lies in the software architecture of the application, specifically in the middle (business-logic) layer. This layer implements a mechanism by which different sets of input data are mapped to persistent data by means of a unique object model of the accepted format for bibliographic material processing (MARC21, UNIMARC, or others).
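A minimal sketch of such a mapping step is given below; the field labels, MARC21 codes and class name are assumptions for illustration, not the authors' actual object model:

    // Hypothetical mapping of user-friendly input labels to MARC21 field/subfield codes.
    import java.util.LinkedHashMap;
    import java.util.Map;

    class Marc21Mapper {
        // Assumed label-to-code mapping: title -> 245 $a, author -> 100 $a, publisher -> 260 $b.
        private static final Map<String, String> LABEL_TO_MARC = new LinkedHashMap<>();
        static {
            LABEL_TO_MARC.put("title", "245$a");
            LABEL_TO_MARC.put("author", "100$a");
            LABEL_TO_MARC.put("publisher", "260$b");
        }

        // Converts user-entered fields into MARC21-coded fields, skipping unknown labels.
        Map<String, String> toMarc(Map<String, String> userInput) {
            Map<String, String> record = new LinkedHashMap<>();
            for (Map.Entry<String, String> e : userInput.entrySet()) {
                String code = LABEL_TO_MARC.get(e.getKey());
                if (code != null) {
                    record.put(code, e.getValue());
                }
            }
            return record;
        }
    }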

Details

The Electronic Library, vol. 26, no. 3
Type: Research Article
ISSN: 0264-0473

Article
Publication date: 1 February 2016

Mhamed Zineddine

Abstract

Purpose

The purpose of this paper is to decrease the traffic created by search engines’ crawlers and solve the deep web problem using an innovative approach.

Design/methodology/approach

A new algorithm was formulated, based on the best existing algorithms, to optimize the traffic caused by web crawlers, which accounts for approximately 40 percent of all network traffic. The crux of this approach is that web servers monitor and log changes and communicate them to search engines as an XML file. The XML file includes the information necessary to generate refreshed pages from existing ones and to reference new pages that need to be crawled. Furthermore, the XML file is compressed to reduce its size to the minimum required.
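A minimal sketch of the server side of such a scheme follows; the element names, file layout and use of GZIP compression are assumptions for illustration, not the paper's exact format:

    // Hypothetical change-log writer: lists changed and new pages as XML, then gzips the file.
    import java.io.FileOutputStream;
    import java.io.IOException;
    import java.io.OutputStreamWriter;
    import java.io.Writer;
    import java.nio.charset.StandardCharsets;
    import java.util.List;
    import java.util.zip.GZIPOutputStream;

    class ChangeLogWriter {
        // URLs are assumed to contain no characters that need XML escaping in this sketch.
        void write(String path, List<String> changedUrls, List<String> newUrls) throws IOException {
            try (Writer out = new OutputStreamWriter(
                    new GZIPOutputStream(new FileOutputStream(path)), StandardCharsets.UTF_8)) {
                out.write("<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<changes>\n");
                for (String url : changedUrls) {
                    out.write("  <changed url=\"" + url + "\"/>\n");   // existing page whose content was updated
                }
                for (String url : newUrls) {
                    out.write("  <new url=\"" + url + "\"/>\n");       // page that still needs to be crawled
                }
                out.write("</changes>\n");
            }
        }
    }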

Findings

The results of this study have shown that the traffic caused by search engines' crawlers might be reduced by 84 percent on average for text content. However, binary content presents many challenges, and new algorithms have to be developed to overcome these issues. The proposed approach will certainly mitigate the deep web issue. The per-domain XML files used by search engines might also be used by web browsers to refresh their caches and therefore help reduce the traffic generated by ordinary users. This reduces users' perceived latency and improves response time to HTTP requests.
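Under the same assumptions, a browser-side consumer of that change log could decompress it and evict the listed URLs from its cache so they are re-fetched on next use; the sketch below (including its regex-based parsing) is illustrative only:

    // Hypothetical consumer: reads the gzipped change log and evicts changed URLs from a local cache.
    import java.io.BufferedReader;
    import java.io.FileInputStream;
    import java.io.IOException;
    import java.io.InputStreamReader;
    import java.nio.charset.StandardCharsets;
    import java.util.Set;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;
    import java.util.zip.GZIPInputStream;

    class ChangeLogReader {
        private static final Pattern CHANGED = Pattern.compile("<changed url=\"([^\"]+)\"/>");

        // Removes every URL listed as changed from the cache, forcing a fresh request next time.
        void evictChanged(String path, Set<String> cachedUrls) throws IOException {
            try (BufferedReader in = new BufferedReader(new InputStreamReader(
                    new GZIPInputStream(new FileInputStream(path)), StandardCharsets.UTF_8))) {
                String line;
                while ((line = in.readLine()) != null) {
                    Matcher m = CHANGED.matcher(line);
                    if (m.find()) {
                        cachedUrls.remove(m.group(1));
                    }
                }
            }
        }
    }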

Research limitations/implications

The study sheds light on the deficiencies and weaknesses of the algorithms that monitor changes and generate binary files. However, a substantial decrease in traffic is achieved for text-based web content.

Practical implications

The findings of this research can be adopted by developers of web server software and browsers, and by search engine companies, to reduce the internet traffic caused by crawlers and to cut costs.

Originality/value

The exponential growth of web content and of other internet-based services, such as cloud computing and social networks, has been causing contention for the available bandwidth of the internet. This research provides a much-needed approach to keeping traffic in check.

Details

Internet Research, vol. 26, no. 1
Type: Research Article
ISSN: 1066-2243
