Search results
Mulki Indana Zulfa, Rudy Hartanto and Adhistya Erna Permanasari
Abstract
Purpose
Internet users and Web-based applications continue to grow every day. The response time on a Web application really determines the convenience of its users. Caching Web content is one strategy that can be used to speed up response time. This strategy is divided into three main techniques, namely, Web caching, Web prefetching and application-level caching. The purpose of this paper is to put forward a literature review of caching strategy research that can be used in Web-based applications.
Design/methodology/approach
The methods used in this paper were as follows: determining the review method, conducting the review process, analyzing pros and cons and drawing conclusions. The review was carried out by searching the literature from leading journals and conferences. The search process started by determining keywords related to caching strategies. To keep the selected literature in line with current developments in website technology, search results were limited to the past 10 years, to English-language publications and to computer science.
Findings
Web caching and Web prefetching are slightly overlapping techniques, as both aim to reduce latency on the user’s side, but they rest on different basic mechanisms. Web caching relies on cache replacement, the algorithm that decides which cached objects to evict from memory when the cache capacity is full, whereas Web prefetching relies on predicting which objects will be accessed in the future. This paper also contributes practical guidelines for choosing the appropriate caching strategy for Web-based applications.
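As an illustration of the cache-replacement mechanism described above, the following is a minimal least-recently-used (LRU) sketch. It is a hypothetical example, not code from any of the surveyed papers, and LRU is only one of many replacement policies the survey covers.

```python
from collections import OrderedDict

class LRUCache:
    """Minimal cache-replacement sketch: when capacity is full,
    the least recently used object is evicted."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()

    def get(self, key):
        if key not in self.store:
            return None
        self.store.move_to_end(key)  # mark as most recently used
        return self.store[key]

    def put(self, key, value):
        if key in self.store:
            self.store.move_to_end(key)
        elif len(self.store) >= self.capacity:
            self.store.popitem(last=False)  # evict least recently used
        self.store[key] = value
```

A Web prefetching strategy would sit alongside such a cache, calling `put` ahead of time for objects predicted to be requested soon.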
Originality/value
This paper conducts a state-of-the art review of caching strategies that can be used in Web applications. Exclusively, this paper presents taxonomy, pros and cons of selected research and discusses data sets that are often used in caching strategy research. This paper also provides another contribution, namely, practical instructions for Web developers to decide the caching strategy.
Details
Keywords
Evagelos Varthis, Marios Poulos, Ilias Giarenis and Sozon Papavlasopoulos
Abstract
Purpose
This study aims to provide a system capable of static searching on a large number of unstructured texts directly on the Web domain while keeping costs to a minimum. The proposed framework is applied to the unstructured texts of Migne’s Patrologia Graeca (PG) collection, setting PG as an implementation example of the method.
Design/methodology/approach
The unstructured texts of PG were automatically transformed into a read-only NoSQL (Not only Structured Query Language) database with a structure identical to that of a representational state transfer (REST) access point interface. The transformation makes it possible to execute queries and retrieve ranked results based on a specialized application of the extended Boolean model.
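The extended Boolean model ranks documents with p-norm similarity, interpolating between strict Boolean and vector-space retrieval. A minimal sketch of the standard p-norm formulas (an illustration of the general model, not the paper's specialized application):

```python
def p_norm_or(weights, p=2.0):
    # OR similarity: distance from the all-zero (worst) point,
    # given per-term document weights in [0, 1]
    return (sum(w ** p for w in weights) / len(weights)) ** (1 / p)

def p_norm_and(weights, p=2.0):
    # AND similarity: complement of the distance from the
    # ideal point (1, 1, ..., 1)
    return 1 - (sum((1 - w) ** p for w in weights) / len(weights)) ** (1 / p)
```

With p = 1 both formulas collapse to an average (vector-like behavior); as p grows they approach strict Boolean OR/AND.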
Findings
Using a specifically built Web-browser-based search tool, the user can quickly locate ranked relevant fragments of texts with the ability to navigate back and forth. The user can search using the initial part of words and by ignoring the diacritics of the Greek language. The performance of the search system is examined comparatively when different versions of the hypertext transfer protocol (HTTP) are used, for various network latencies and different modes of network connection. Queries using HTTP/2 have by far the best performance compared to any of the HTTP/1.1 modes.
Originality/value
The system is not limited to the case study of PG and has a generic application in the field of humanities. The expandability of the system in terms of semantic enrichment is feasible by taking into account synonyms and topics if they are available. The system’s main advantage is that it is totally static which implies important features such as simplicity, efficiency, fast response, portability, security and scalability.
Abstract
Purpose
Effective and efficient software security inspection is crucial as the existence of vulnerabilities represents severe risks to software users. The purpose of this paper is to empirically evaluate the potential application of Stochastic Gradient Boosting Trees (SGBT) as a novel model for enhanced prediction of vulnerable Web components compared to common, popular and recent machine learning models.
Design/methodology/approach
An empirical study was conducted where the SGBT and 16 other prediction models were trained, optimized and cross-validated using vulnerability data sets from multiple versions of two open-source Web applications written in PHP. The prediction performance of these models has been evaluated and compared based on accuracy, precision, recall and F-measure.
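The four evaluation measures named above are standard for binary classification. A small sketch of how they are computed for a vulnerable / not-vulnerable prediction (an illustration of the metrics, not the study's evaluation code):

```python
def classification_metrics(y_true, y_pred):
    """Accuracy, precision, recall and F-measure for binary labels,
    where 1 = vulnerable component, 0 = not vulnerable."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f_measure = (2 * precision * recall / (precision + recall)
                 if precision + recall else 0.0)
    return accuracy, precision, recall, f_measure
```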
Findings
The results indicate that the SGBT models offer improved prediction over the other 16 models and thus are more effective and reliable in predicting vulnerable Web components.
Originality/value
This paper proposed a novel application of SGBT for enhanced prediction of vulnerable Web components and showed its effectiveness.
Nassiriah Shaari, Stuart Charters and Clare Churcher
Abstract
Purpose
Accessing web sites from mobile devices has been gaining popularity but often does not give the same results and experience as accessing them from a personal computer. The paper aims to discuss these issues.
Design/methodology/approach
To address these issues, the paper presents a server-side adaptation approach that delivers prioritised adaptive pages to different devices through a prioritisation system. The prioritisation approach allows users to prioritise page items for different devices. The prioritisation engine reorders, shows and removes items based on priorities set by users or developers.
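The reorder/show/remove behavior of such a prioritisation engine can be sketched as follows. The item structure and threshold are hypothetical assumptions for illustration, not the paper's actual design:

```python
def adapt_page(items, device, hide_below=1):
    """Hypothetical prioritisation-engine sketch: each page item carries a
    per-device priority; items below the threshold are removed for that
    device, and the rest are reordered from highest to lowest priority."""
    visible = [it for it in items
               if it["priority"].get(device, 0) >= hide_below]
    return sorted(visible,
                  key=lambda it: it["priority"].get(device, 0),
                  reverse=True)
```

Because only ordering and visibility change, the same items, terminology and content are reused for every device, which matches the consistency goal described in the Findings.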
Findings
With this approach, the overall web page's structure is preserved, and the same terminology, content and similar location of content are delivered to all devices. A user trial and a performance test were conducted. Results show that adaptive pages and prioritisation provide a consistent and efficient web experience across different devices.
Originality/value
The approach provides advantages over both client-side and proxy-based approaches, and significant experimentation was conducted to determine its applicability and effectiveness.
Rebeca Schroeder, Denio Duarte and Ronaldo dos Santos Mello
Abstract
Purpose
Designing efficient XML schemas is essential for XML applications which manage semi‐structured data. On generating XML schemas, there are two opposite goals: to avoid redundancy and to provide connected structures in order to achieve good performance on queries. In general, highly connected XML structures allow data redundancy, and redundancy‐free schemas generate disconnected XML structures. The purpose of this paper is to describe and evaluate by experiments an approach which balances such trade‐off through a workload analysis. Additionally, it aims to identify the most accessed data based on the workload and suggest indexes to improve access performance.
Design/methodology/approach
The paper applies and evaluates a workload‐aware methodology to provide indexing and highly connected structures for data which are intensively accessed through paths traversed by the workload.
Findings
The paper presents benchmarking results on a set of design approaches for XML schemas and demonstrates that the XML schemas generated by the approach provide high query performance and low cost of data redundancy on balancing the trade‐off on XML schema design.
Research limitations/implications
Although an XML benchmark is applied in these experiments, further experiments are expected in a real‐world application.
Practical implications
The approach proposed may be applied in a real‐world process for designing new XML databases as well as in reverse engineering process to improve XML schemas from legacy databases.
Originality/value
Unlike related work, the reported approach integrates the two opposite goals in XML schema design and generates suitable schemas according to a workload. An experimental evaluation shows that the proposed methodology is promising.
Timothy W. Cole, William H. Mischo, Thomas G. Habing and Robert H. Ferrer
Abstract
Describes an approach to the processing and presentation of online full‐text journals that utilizes several evolving information technologies, including extensible markup language (XML) and extensible stylesheet language transformations (XSLT). Discusses major issues and trade‐offs associated with these technologies, and also specific lessons learned from our use of these technologies in the Illinois Testbed of full‐text journal articles. Focuses especially on issues associated with the representation of documents in XML, techniques to create and normalize metadata describing XML document instances, XSLT features employed in the Illinois Testbed, and trade‐offs of different XSLT implementation options. Pays special attention to techniques for transforming between XML and HTML formats for rendering in today’s commercial Web browsers.
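The XML-to-HTML rendering step described above can be illustrated with a small transform. In the Illinois Testbed this role is played by XSLT stylesheets; here a short Python stand-in (with a hypothetical article fragment) conveys the idea:

```python
import xml.etree.ElementTree as ET

# Hypothetical journal-article fragment; element names are
# illustrative, not the Testbed's actual document model.
ARTICLE_XML = """<article>
  <title>Caching Strategies</title>
  <para>First paragraph.</para>
  <para>Second paragraph.</para>
</article>"""

def article_to_html(xml_text):
    """Transform an <article> XML fragment into simple HTML,
    analogous to what an XSLT stylesheet would produce."""
    root = ET.fromstring(xml_text)
    parts = ["<h1>%s</h1>" % root.findtext("title")]
    parts += ["<p>%s</p>" % p.text for p in root.findall("para")]
    return "\n".join(parts)
```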
Adeleh Asemi, Asefeh Asemi and Hamid Tahaei
Abstract
Purpose
The objective of this research was to develop a new and highly accurate approach based on a fuzzy inference system (FIS) for the evaluation of usability based on ISO 9241-210:2019. In this study, a fully automated method of usability evaluation is used for interactive systems with a special look at interactive social robots.
Design/methodology/approach
Fuzzy logic is used as an intelligent computing technique to deal with uncertainty and incomplete data. The system is implemented using the MATLAB fuzzy toolbox. It attempts to quantify four criteria that correlate highly with the ISO 9241-210:2019 criteria for the evaluation of interactive systems with maximum usability. The system was also evaluated with standard cases of computer interactive systems usability evaluation. It did not need to be trained on large data sets or to check the rules; only a small amount of data was used to fine-tune the fuzzy sets. The results were compared against an experimental usability evaluation with statistical analysis.
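The core of such a fuzzy inference system is a set of membership functions and rules. A toy sketch of one Mamdani-style rule (the variable names, scale and membership shape are assumptions for illustration, not the authors' actual rule base):

```python
def triangular(x, a, b, c):
    """Triangular membership function: 0 outside (a, c), peak of 1 at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Hypothetical rule: IF effectiveness is high AND satisfaction is high
# THEN usability is high. In Mamdani inference the rule's firing
# strength is the minimum of the antecedent memberships.
def rule_strength(effectiveness, satisfaction):
    high = lambda x: triangular(x, 0.5, 1.0, 1.5)  # "high" on a 0-1 scale
    return min(high(effectiveness), high(satisfaction))
```

A full FIS would aggregate many such rules and defuzzify the result into a crisp usability score; the MATLAB fuzzy toolbox automates these steps.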
Findings
It is found that there is a strong linear relation between the FIS usability assessment and the System Usability Scale (SUS)-based usability assessment, and the authors’ new method provides reliable results in estimating usability.
Research limitations/implications
In human-robot systems, human performance plays an important role in the performance of social interactive systems. In the present study, the proposed system considers all the necessary criteria for designing an interactive system with a high level of usability because it is based on ISO 9241-210:2019.
Practical implications
For future research, the system could be expanded with the training of historical data and the production of rules through integrating FIS and neural networks.
Originality/value
This system considered all essential criteria for designing an interactive system with a high level of usability because it is based on ISO 9241-210:2019.
Guoqing Zhao, Jana Suklan, Shaofeng Liu, Carmen Lopez and Lise Hunter
Abstract
Purpose
In a competitive environment, eHealth small and medium-sized enterprises’ (SMEs’) barriers to survival differ from those of large enterprises. Empirical research on barriers to eHealth SMEs in less prosperous areas has been largely neglected. This study fills this gap by employing an integrated approach to analyze barriers to the development of eHealth SMEs. The purpose of this paper is to address this issue.
Design/methodology/approach
The authors collected data through semi-structured interviews and conducted thematic analysis to identify 16 barriers, which were used as inputs into total interpretive structural modeling (TISM) to build interrelationships among them and identify key barriers. Cross-impact matrix multiplication applied to classification (MICMAC) was then applied to validate the TISM model and classify the 16 barriers into four categories.
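MICMAC classifies factors by their driving power (row sums of the reachability matrix) and dependence power (column sums). A minimal sketch of that classification step (illustrative only; the midpoint threshold is a simplifying assumption, not the authors' exact procedure):

```python
def micmac_classify(reachability):
    """Classify factors from a binary reachability matrix into the four
    MICMAC categories, splitting driving and dependence power at the
    midpoint n/2 (a simplifying assumption for this sketch)."""
    n = len(reachability)
    mid = n / 2
    categories = []
    for i in range(n):
        driving = sum(reachability[i])                      # row sum
        dependence = sum(reachability[j][i] for j in range(n))  # column sum
        if driving > mid and dependence > mid:
            categories.append("linkage")
        elif driving > mid:
            categories.append("driving")
        elif dependence > mid:
            categories.append("dependent")
        else:
            categories.append("autonomous")
    return categories
```

A key barrier, such as the transcultural problems identified in the Findings, would show high driving power: it influences many other barriers.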
Findings
This study makes significant contributions to theory by identifying new barriers and their interrelationships, distinguishing key barriers and classifying the barriers into four categories. The authors identify that transcultural problems are the key barrier and deserve particular attention. eHealth SMEs originating from regions with cultural value orientations, such as hierarchy and embeddedness, that differ from the UK’s affective autonomy orientation should strengthen their transcultural awareness when seeking to expand into UK markets.
Originality/value
By employing an integrated approach to analyze barriers that impede the development of eHealth SMEs in a less prosperous area of the UK, this study raises entrepreneurs’ awareness of running businesses in places with different cultural value orientations.
Stuti Tandon, Vijay Kumar and V.B. Singh
Abstract
Purpose
Code smells indicate deep software issues. They have been studied by researchers with different perspectives. The need to study code smells was felt from the perspective of the software industry. The authors aim to evaluate the code smells on the basis of their scope of impact on widely used open-source software (OSS) projects.
Design/methodology/approach
The authors have proposed a methodology to identify and rank the smells in the source code of 16 versions of Apache Tomcat Software. Further, the authors have analyzed the categorized smells by calculating the weight of the smells using constant weights as well as Best Worst Method (BWM). Consequently, the authors have used Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) to determine the rank of versions using constant weights as well as BWM.
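TOPSIS ranks alternatives (here, software versions) by their relative closeness to an ideal solution. A compact sketch of the standard algorithm (an illustration of the general method, not the study's implementation or its criteria weights):

```python
import math

def topsis(matrix, weights, benefit):
    """TOPSIS scores: matrix[i][j] is alternative i's value on criterion j,
    weights[j] its weight, benefit[j] True when larger is better.
    Higher score = closer to the ideal solution."""
    m, n = len(matrix), len(matrix[0])
    # vector-normalise each column, then apply the criteria weights
    norms = [math.sqrt(sum(matrix[i][j] ** 2 for i in range(m)))
             for j in range(n)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)]
         for i in range(m)]
    ideal = [max(v[i][j] for i in range(m)) if benefit[j]
             else min(v[i][j] for i in range(m)) for j in range(n)]
    worst = [min(v[i][j] for i in range(m)) if benefit[j]
             else max(v[i][j] for i in range(m)) for j in range(n)]
    scores = []
    for i in range(m):
        d_pos = math.sqrt(sum((v[i][j] - ideal[j]) ** 2 for j in range(n)))
        d_neg = math.sqrt(sum((v[i][j] - worst[j]) ** 2 for j in range(n)))
        scores.append(d_neg / (d_pos + d_neg))
    return scores
```

The weights vector is where the study's two schemes differ: constant weights versus weights derived with the Best Worst Method.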
Findings
Version 1 of Apache Tomcat has the fewest code smells, and version 8 is reported to contain the most. The study reports notable differences between the two weighting schemes during the trend analysis. The findings also show that the number of code smells increases with the release of newer versions. This increase is observed up to version 8, followed by a slight decline in the number of code smells in later releases.
Originality/value
The focus is to analyze smells and rank several versions of Apache Tomcat, one of the most widely used software systems for code smell study. This study is significant for researchers, as it prioritizes the versions and helps narrow down the choice of software used to study code smells.
Ka I. Pun, Yain Whar Si and Kin Chan Pau
Abstract
Purpose
Intensive traffic often occurs in web‐enabled business processes hosted by travel industry and government portals. An extreme case of intensive traffic is a flash crowd situation, when the number of web users spikes within a short time due to unexpected events such as political unrest or extreme weather conditions. As a result, the servers hosting these business processes can no longer handle the overwhelming number of service requests. To alleviate this problem, process engineers usually analyze audit trail data collected from the application server and reengineer their business processes to withstand an unexpected surge in visitors. However, such analysis can only reveal the performance of the application server from an internal perspective. This paper aims to investigate this issue.
Design/methodology/approach
This paper proposes an approach for analyzing key performance indicators of traffic intensive web‐enabled business processes from audit trail data, web server logs, and stress testing logs.
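One external key performance indicator derivable from web server logs is the distribution of response times. A small sketch of the idea (the log-line format here is a hypothetical assumption, not the paper's actual log schema):

```python
def response_time_percentiles(log_lines):
    """Parse hypothetical web-server log lines of the form
    "<path> <status> <response_ms>" and report the median and
    95th-percentile response times as simple KPIs."""
    times = sorted(float(line.split()[2]) for line in log_lines)

    def pct(p):
        # nearest-rank style percentile over the sorted times
        idx = min(len(times) - 1, int(p * len(times)))
        return times[idx]

    return {"p50": pct(0.50), "p95": pct(0.95)}
```

Sudden growth in the 95th percentile under load is the kind of external signal that, combined with audit trail data, points to an impending flash crowd bottleneck.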
Findings
The key performance indicators identified in the study's approach can be used to understand the behavior of traffic intensive web‐enabled business processes and the underlying factors that affect the stability of the web server.
Originality/value
The proposed analysis also provides an internal as well as an external view of performance. Moreover, the calculated key performance indicators can be used by process engineers to locate potential bottlenecks, reengineer business processes and implement contingency measures for traffic-intensive situations.