Search results

1–10 of over 37,000
Article
Publication date: 6 October 2020

Mulki Indana Zulfa, Rudy Hartanto and Adhistya Erna Permanasari

Abstract

Purpose

Internet users and Web-based applications continue to grow every day. The response time of a Web application largely determines the convenience of its users. Caching Web content is one strategy that can be used to speed up response time. This strategy is divided into three main techniques, namely, Web caching, Web prefetching and application-level caching. The purpose of this paper is to put forward a literature review of caching strategy research that can be used in Web-based applications.

Design/methodology/approach

The methods used in this paper were as follows: determining the review method, conducting the review process, analysing pros and cons, and drawing conclusions. The review was carried out by searching the literature from leading journals and conferences. The search process started by determining keywords related to caching strategies. To restrict the review to literature that reflects current developments in website technology, search results were limited to the past 10 years, to English-language publications and to computer science only.

Findings

Note in advance that Web caching and Web prefetching are slightly overlapping techniques, as both aim to reduce latency on the user's side. The two techniques, however, rest on different basic mechanisms. Web caching is built on cache replacement, that is, an algorithm for swapping cache objects out of memory when the cache capacity is full, whereas Web prefetching is built on predicting which cache objects will be accessed in the future. This paper also contributes practical guidelines for choosing the appropriate caching strategy for Web-based applications.
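As a concrete illustration of the cache-replacement mechanism behind Web caching, the following is a minimal sketch of a least-recently-used (LRU) policy, one classic replacement algorithm in this family. The `LRUCache` class name and the example URLs are illustrative, not taken from the paper.

```python
from collections import OrderedDict

class LRUCache:
    """Least-recently-used replacement: when capacity is full, the object
    that has gone unused the longest is evicted to make room."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()  # key -> cached object, ordered by recency

    def get(self, key):
        if key not in self.store:
            return None  # cache miss
        self.store.move_to_end(key)  # mark as most recently used
        return self.store[key]

    def put(self, key, value):
        if key in self.store:
            self.store.move_to_end(key)
        self.store[key] = value
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict the least recently used

cache = LRUCache(capacity=2)
cache.put("/index.html", "<html>...</html>")
cache.put("/style.css", "body { margin: 0 }")
cache.get("/index.html")                # touch: now most recently used
cache.put("/app.js", "console.log(1)")  # cache full: evicts /style.css
print(cache.get("/style.css"))          # None (evicted)
```

A Web prefetcher would instead populate such a cache proactively, from a prediction of the objects a user is likely to request next.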

Originality/value

This paper conducts a state-of-the art review of caching strategies that can be used in Web applications. Exclusively, this paper presents taxonomy, pros and cons of selected research and discusses data sets that are often used in caching strategy research. This paper also provides another contribution, namely, practical instructions for Web developers to decide the caching strategy.

Details

International Journal of Web Information Systems, vol. 16 no. 5
Type: Research Article
ISSN: 1744-0084

Article
Publication date: 19 May 2021

Evagelos Varthis, Marios Poulos, Ilias Giarenis and Sozon Papavlasopoulos

Abstract

Purpose

This study aims to provide a system capable of static searching on a large number of unstructured texts directly on the Web domain while keeping costs to a minimum. The proposed framework is applied to the unstructured texts of Migne’s Patrologia Graeca (PG) collection, setting PG as an implementation example of the method.

Design/methodology/approach

The unstructured texts of PG were automatically transformed into a read-only Not-only-SQL (NoSQL) database with a structure identical to that of a representational state transfer (REST) access-point interface. The transformation makes it possible to execute queries and retrieve ranked results based on a specialized application of the extended Boolean model.
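The extended Boolean model combines Boolean operators with term weights via a p-norm. A minimal sketch of the standard formulation follows; the function names and document weights are illustrative, and the paper applies its own specialized variant.

```python
def p_norm_or(weights, p=2.0):
    # p-norm OR: score is high if any query term is well represented
    return (sum(w ** p for w in weights) / len(weights)) ** (1.0 / p)

def p_norm_and(weights, p=2.0):
    # p-norm AND: score is high only if all query terms are well represented
    return 1.0 - (sum((1.0 - w) ** p for w in weights) / len(weights)) ** (1.0 / p)

# Ranking two documents for the query "term1 OR term2", with per-document
# term weights in [0, 1] (illustrative values):
doc_a = [0.9, 0.1]
doc_b = [0.5, 0.5]
ranked = sorted([("a", p_norm_or(doc_a)), ("b", p_norm_or(doc_b))],
                key=lambda t: t[1], reverse=True)
print(ranked)  # doc "a" ranks first: one strong term beats two middling ones
```

As p grows, the operators approach strict Boolean behaviour; p = 1 reduces both to a plain average.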

Findings

Using a specifically built Web-browser-based search tool, the user can quickly locate ranked relevant fragments of texts with the ability to navigate back and forth. The user can search using the initial part of words and by ignoring the diacritics of the Greek language. The performance of the search system is examined comparatively when different versions of the hypertext transfer protocol (HTTP) are used, for various network latencies and different modes of network connection. Queries over HTTP/2 have by far the best performance, compared with any HTTP/1.1 mode.

Originality/value

The system is not limited to the case study of PG and has a generic application in the field of humanities. The expandability of the system in terms of semantic enrichment is feasible by taking into account synonyms and topics if they are available. The system's main advantage is that it is totally static, which implies important properties such as simplicity, efficiency, fast response, portability, security and scalability.

Details

International Journal of Web Information Systems, vol. 17 no. 3
Type: Research Article
ISSN: 1744-0084

Article
Publication date: 21 November 2018

Mahmoud Elish

Abstract

Purpose

Effective and efficient software security inspection is crucial as the existence of vulnerabilities represents severe risks to software users. The purpose of this paper is to empirically evaluate the potential application of Stochastic Gradient Boosting Trees (SGBT) as a novel model for enhanced prediction of vulnerable Web components compared to common, popular and recent machine learning models.

Design/methodology/approach

An empirical study was conducted in which the SGBT and 16 other prediction models were trained, optimized and cross-validated using vulnerability data sets from multiple versions of two open-source Web applications written in PHP. The prediction performance of these models has been evaluated and compared based on accuracy, precision, recall and F-measure.
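The four evaluation measures named above derive directly from a confusion matrix. A small illustrative helper (the function name and counts are assumptions, not from the paper):

```python
def classification_metrics(tp, fp, fn, tn):
    """Accuracy, precision, recall and F-measure from confusion-matrix
    counts (true/false positives and negatives)."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f_measure = (2 * precision * recall / (precision + recall)
                 if precision + recall else 0.0)
    return accuracy, precision, recall, f_measure

# A model that flags 8 of 10 vulnerable components, with 2 false alarms
# among 90 clean components (illustrative counts):
print(classification_metrics(tp=8, fp=2, fn=2, tn=88))
```

F-measure matters here because vulnerable components are rare, so raw accuracy alone can look high even for a useless predictor.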

Findings

The results indicate that the SGBT models offer improved prediction over the other 16 models and thus are more effective and reliable in predicting vulnerable Web components.

Originality/value

This paper proposed a novel application of SGBT for enhanced prediction of vulnerable Web components and showed its effectiveness.

Details

International Journal of Web Information Systems, vol. 15 no. 2
Type: Research Article
ISSN: 1744-0084

Article
Publication date: 18 November 2013

Nassiriah Shaari, Stuart Charters and Clare Churcher

Abstract

Purpose

Accessing web sites from mobile devices has been gaining popularity but often does not give the same results and experience as accessing them from a personal computer. The paper aims to discuss these issues.

Design/methodology/approach

To address these issues, the paper presents a server-side adaptation approach that prioritises adaptive pages for different devices through a prioritisation system. The prioritisation approach allows users to prioritise page items for different devices. The prioritisation engine reorders, shows and removes items based on their priority, as set by users or developers.
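A minimal sketch of such a prioritisation engine, assuming each page item carries a per-device priority set by users or developers. The data layout, item names and threshold rule are illustrative assumptions, not the paper's implementation.

```python
def adapt_page(items, device, threshold):
    """Keep only items whose priority for the device meets the threshold,
    then reorder them from highest to lowest priority."""
    visible = [it for it in items if it["priority"].get(device, 0) >= threshold]
    return sorted(visible, key=lambda it: it["priority"].get(device, 0),
                  reverse=True)

page_items = [
    {"name": "navigation", "priority": {"mobile": 3, "desktop": 5}},
    {"name": "banner",     "priority": {"mobile": 0, "desktop": 2}},
    {"name": "article",    "priority": {"mobile": 5, "desktop": 4}},
]
# On mobile the banner is removed and the article is promoted to the top:
print([it["name"] for it in adapt_page(page_items, "mobile", threshold=1)])
# → ['article', 'navigation']
```

Because every device receives the same items filtered from one source list, terminology and content stay consistent across devices, which is the consistency property the paper emphasises.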

Findings

With this approach, the overall web page's structure is preserved and the same terminology, content and similar placement of content are delivered to all devices. A user trial and a performance test were conducted. Results show that adaptive pages with prioritisation provide a consistent and efficient web experience across different devices.

Originality/value

The approach provides advantages over both client-side and proxy-based approaches, and significant experimentation was conducted to determine its applicability and effectiveness.

Article
Publication date: 16 November 2012

Rebeca Schroeder, Denio Duarte and Ronaldo dos Santos Mello

Abstract

Purpose

Designing efficient XML schemas is essential for XML applications which manage semi‐structured data. On generating XML schemas, there are two opposite goals: to avoid redundancy and to provide connected structures in order to achieve good performance on queries. In general, highly connected XML structures allow data redundancy, and redundancy‐free schemas generate disconnected XML structures. The purpose of this paper is to describe and evaluate by experiments an approach which balances such trade‐off through a workload analysis. Additionally, it aims to identify the most accessed data based on the workload and suggest indexes to improve access performance.

Design/methodology/approach

The paper applies and evaluates a workload‐aware methodology to provide indexing and highly connected structures for data which are intensively accessed through paths traversed by the workload.

Findings

The paper presents benchmarking results on a set of design approaches for XML schemas and demonstrates that the XML schemas generated by the approach provide high query performance and low cost of data redundancy on balancing the trade‐off on XML schema design.

Research limitations/implications

Although an XML benchmark is applied in these experiments, further experiments are expected in a real‐world application.

Practical implications

The approach proposed may be applied in a real‐world process for designing new XML databases as well as in reverse engineering process to improve XML schemas from legacy databases.

Originality/value

Unlike related work, the reported approach integrates the two opposite goals of XML schema design and generates suitable schemas according to a workload. An experimental evaluation shows that the proposed methodology is promising.

Article
Publication date: 1 September 2001

Timothy W. Cole, William H. Mischo, Thomas G. Habing and Robert H. Ferrer

Abstract

Describes an approach to the processing and presentation of online full‐text journals that utilizes several evolving information technologies, including extensible markup language (XML) and extensible stylesheet language transformations (XSLT). Discusses major issues and trade‐offs associated with these technologies, and also specific lessons learned from our use of these technologies in the Illinois Testbed of full‐text journal articles. Focuses especially on issues associated with the representation of documents in XML, techniques to create and normalize metadata describing XML document instances, XSLT features employed in the Illinois Testbed, and trade‐offs of different XSLT implementation options. Pays special attention to techniques for transforming between XML and HTML formats for rendering in today’s commercial Web browsers.

Details

Library Hi Tech, vol. 19 no. 3
Type: Research Article
ISSN: 0737-8831

Article
Publication date: 24 August 2021

Stuti Tandon, Vijay Kumar and V.B. Singh

Abstract

Purpose

Code smells indicate deep software issues. They have been studied by researchers from different perspectives. The need to study code smells was felt from the perspective of the software industry. The authors aim to evaluate code smells on the basis of their scope of impact on widely used open-source software (OSS) projects.

Design/methodology/approach

The authors have proposed a methodology to identify and rank the smells in the source code of 16 versions of the Apache Tomcat software. Further, the authors have analyzed the categorized smells by calculating their weights using constant weights as well as the Best Worst Method (BWM). Consequently, the authors have used the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) to determine the rank of the versions using constant weights as well as BWM.
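TOPSIS ranks alternatives by their relative closeness to an ideal solution. A compact sketch under the usual formulation (vector normalisation, Euclidean distances) follows; the matrix values are illustrative, not Tomcat data, and the function name is an assumption.

```python
import math

def topsis(matrix, weights, benefit):
    """Rank alternatives by closeness to the ideal solution.
    matrix[i][j]: score of alternative i on criterion j;
    benefit[j]:  True if higher is better on criterion j
                 (False for cost criteria such as smell counts)."""
    n, m = len(matrix), len(matrix[0])
    # Vector-normalise each column, then apply the criterion weights
    norms = [math.sqrt(sum(matrix[i][j] ** 2 for i in range(n)))
             for j in range(m)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(m)]
         for i in range(n)]
    # Ideal and anti-ideal points per criterion
    ideal = [max(col) if benefit[j] else min(col)
             for j, col in enumerate(zip(*v))]
    anti = [min(col) if benefit[j] else max(col)
            for j, col in enumerate(zip(*v))]
    scores = []
    for row in v:
        d_best = math.sqrt(sum((x - b) ** 2 for x, b in zip(row, ideal)))
        d_worst = math.sqrt(sum((x - w) ** 2 for x, w in zip(row, anti)))
        scores.append(d_worst / (d_best + d_worst))  # closeness coefficient
    return scores

# Three hypothetical versions scored on two cost criteria
# (e.g. counts of two smell categories; fewer is better):
scores = topsis(matrix=[[10, 2], [40, 8], [20, 4]],
                weights=[0.5, 0.5], benefit=[False, False])
print(scores)  # the first version, with the fewest smells, scores highest
```

BWM would supply the `weights` vector here in place of the constant weights; the closeness coefficients then give the version ranking.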

Findings

Version 1 of Apache Tomcat has the fewest code smells, and version 8 is reported to contain the most. Notable differences between the two weighting schemes during the trend analysis are reported by the study. The findings also show that the number of code smells increases with the release of newer versions up to version 8, followed by a marked decrease in subsequent releases.

Originality/value

The focus is to analyze smells in, and rank, several versions of Apache Tomcat, one of the software systems most widely used in code smell studies. This study will be a significant one for researchers, as it prioritizes the versions and will help narrow down the options of software used to study code smells.

Details

International Journal of Quality & Reliability Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0265-671X

Article
Publication date: 13 April 2012

Ka I. Pun, Yain Whar Si and Kin Chan Pau

Abstract

Purpose

Intensive traffic often occurs in web-enabled business processes hosted by travel industry and government portals. An extreme case of intensive traffic is a flash crowd situation, when the number of web users spikes within a short time due to unexpected events such as political unrest or extreme weather conditions. As a result, the servers hosting these business processes can no longer handle the overwhelming number of service requests. To alleviate this problem, process engineers usually analyze audit trail data collected from the application server and reengineer their business processes to withstand unexpected surges in visitors. However, such analysis can only reveal the performance of the application server from the internal perspective. This paper aims to investigate this issue.

Design/methodology/approach

This paper proposes an approach for analyzing key performance indicators of traffic intensive web‐enabled business processes from audit trail data, web server logs, and stress testing logs.
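As a rough illustration of deriving performance indicators from web server logs, the sketch below computes request volume, server-error rate and the most requested path from Common Log Format lines. The regex, function name and choice of indicators are assumptions for illustration, not the paper's KPIs.

```python
import re
from collections import Counter

# Common Log Format: host ident user [time] "method path proto" status bytes
LOG_LINE = re.compile(r'\S+ \S+ \S+ \[[^\]]+\] "(\S+) (\S+) [^"]*" (\d{3}) \d+')

def kpis_from_log(lines):
    """Total requests, server-error rate and the most requested path."""
    total, errors = 0, 0
    paths = Counter()
    for line in lines:
        m = LOG_LINE.match(line)
        if not m:
            continue  # skip malformed lines
        total += 1
        path, status = m.group(2), int(m.group(3))
        if status >= 500:
            errors += 1
        paths[path] += 1
    error_rate = errors / total if total else 0.0
    return total, error_rate, paths.most_common(1)

sample = [
    '10.0.0.1 - - [10/Oct/2020:13:55:36 +0000] "GET /book HTTP/1.1" 200 2326',
    '10.0.0.2 - - [10/Oct/2020:13:55:37 +0000] "GET /book HTTP/1.1" 500 512',
]
print(kpis_from_log(sample))  # (2, 0.5, [('/book', 2)])
```

A rising error rate under load, correlated with audit trail and stress-test data, is the kind of signal such indicators are meant to surface.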

Findings

The key performance indicators identified in the study's approach can be used to understand the behavior of traffic intensive web‐enabled business processes and the underlying factors that affect the stability of the web server.

Originality/value

The proposed analysis also provides an internal as well as an external view of the performance. Moreover, the calculated key performance indicators can be used by the process engineers for locating potential bottlenecks, reengineering business processes, and implementing contingency measures for traffic intensive situations.

Details

Business Process Management Journal, vol. 18 no. 2
Type: Research Article
ISSN: 1463-7154

Article
Publication date: 1 August 2005

May El Barachi, Roch H. Glitho and Rachida Dssouli

Abstract

Applications offered to end‐users as value‐added services play a vital role in the success of Internet telephony service providers. Today's standard frameworks for developing them have several shortcomings that motivate the need for novel frameworks. Web services are an emerging paradigm for program‐to‐program interactions over the Internet. This paradigm is a prime candidate for application development in Internet Telephony because it may aid in addressing the drawbacks of today's standard frameworks. This paper presents a case study that gives insights into the suitability of Web services as a standard framework for the development of conferencing applications in Internet Telephony. The case study includes the definition and the implementation of a novel Web service for conferencing, the implementation of the conference server in a SIP environment, the development of several conferencing applications (including a game), and performance evaluation. Based on this case study, we conclude that Web services are very promising for conferencing application development in Internet Telephony, especially as the performance can be significantly improved with the emerging techniques that are briefly discussed in the paper.

Details

International Journal of Web Information Systems, vol. 1 no. 3
Type: Research Article
ISSN: 1744-0084

Article
Publication date: 1 March 2001

Bhupesh Kothari and Mark Claypool

Abstract

The World Wide Web has experienced phenomenal growth over the past few years, placing heavy load on Web servers. Today's Web servers also process an increasing number of requests for dynamic pages, making server load even more critical. The performance of Web servers delivering static pages is well studied and well understood. However, there has been little analytic or empirical study of the performance of Web servers delivering dynamic pages. This paper focuses on experimentally measuring and analyzing the performance of three dynamic Web page generation technologies: CGI, FastCGI and Servlets. In this paper, we present experimental results for Web server performance under CGI, FastCGI and Servlets. Then, we develop a multivariate linear regression model and predict Web server performance under some typical dynamic requests. We find that CGI and FastCGI perform effectively the same under most low‐level benchmarks, while Servlets perform noticeably worse. Our regression model shows the same deficiency in Servlets' performance under typical dynamic Web page requests.
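The regression idea can be illustrated with ordinary least squares. For brevity this sketch fits a single predictor, whereas the paper uses a multivariate model; the measurements below are hypothetical, not the paper's data.

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))  # slope from covariance/variance
    a = my - b * mx                          # intercept through the means
    return a, b

# Hypothetical measurements: request rate vs mean response time (ms)
load = [10, 20, 30, 40]
latency = [21, 39, 62, 78]
a, b = fit_line(load, latency)
print(a + b * 50)  # predicted latency at 50 requests/s
```

With several request parameters (page size, script complexity, concurrency), the same least-squares principle extends to the multivariate case via the normal equations.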

Details

Internet Research, vol. 11 no. 1
Type: Research Article
ISSN: 1066-2243
