Search results

1 – 10 of over 1000
Article
Publication date: 6 October 2020

Mulki Indana Zulfa, Rudy Hartanto and Adhistya Erna Permanasari


Abstract

Purpose

Internet users and Web-based applications continue to grow every day. The response time of a Web application largely determines how convenient it is for its users. Caching Web content is one strategy that can be used to speed up response time. This strategy is divided into three main techniques, namely, Web caching, Web prefetching and application-level caching. The purpose of this paper is to put forward a literature review of caching strategy research that can be used in Web-based applications.

Design/methodology/approach

The method used in this paper comprised four steps: determining the review method, conducting the review process, analysing pros and cons, and drawing conclusions. The review was carried out by searching the literature from leading journals and conferences. The search process starts by determining keywords related to caching strategies. To keep the literature in step with current developments in website technology, search results were limited to the past 10 years, to English-language publications and to computer science.

Findings

Web caching and Web prefetching overlap slightly because both aim to reduce latency on the user’s side, but the two techniques rest on different basic mechanisms. Web caching relies on cache replacement, the algorithm that decides which cached objects to evict from memory when cache capacity is full, whereas Web prefetching relies on predicting which objects will be accessed in the future. This paper also contributes practical guidelines for choosing the appropriate caching strategy for Web-based applications.
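The distinction between the two mechanisms can be sketched in code. The following is an illustrative sketch, not taken from the paper: an LRU policy stands in for cache replacement, and a simple successor-frequency predictor stands in for prefetching; the class and function names are ours.

```python
from collections import Counter, OrderedDict

class LRUCache:
    """Web caching side: evict the least recently used object when the
    cache is full (LRU is one common replacement policy)."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()

    def get(self, url):
        if url not in self.store:
            return None                     # cache miss
        self.store.move_to_end(url)         # mark as most recently used
        return self.store[url]

    def put(self, url, content):
        self.store[url] = content
        self.store.move_to_end(url)
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict the LRU object

def predict_next(access_log, current):
    """Web prefetching side: predict the most frequent successor of the
    object being accessed now, so it can be fetched before it is requested."""
    successors = Counter(b for a, b in zip(access_log, access_log[1:])
                         if a == current)
    return successors.most_common(1)[0][0] if successors else None
```

The cache reacts to capacity pressure after the fact; the predictor acts before the request arrives, which is exactly the overlap-with-different-mechanisms point made above.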

Originality/value

This paper conducts a state-of-the-art review of caching strategies that can be used in Web applications. Specifically, it presents a taxonomy and the pros and cons of selected research, and discusses the data sets often used in caching strategy research. It also provides a further contribution: practical instructions to help Web developers decide on a caching strategy.

Details

International Journal of Web Information Systems, vol. 16 no. 5
Type: Research Article
ISSN: 1744-0084


Article
Publication date: 7 November 2016

Diogo Tenório Cintra, Ramiro Brito Willmersdorf, Paulo Roberto Maciel Lyra and William Wagner Matos Lira


Abstract

Purpose

The purpose of this paper is to present a methodology for parallel simulation that employs the discrete element method (DEM) and improves the cache performance using Hilbert space filling curves (HSFC).

Design/methodology/approach

The methodology is well suited for large-scale engineering simulations and considers modelling restrictions due to memory limitations related to the problem size. An algorithm based on mapping indexes, which does not use excessive additional memory, is adopted to enable the contact search procedure for highly scattered domains. The parallel solution strategy uses the recursive coordinate bisection method in the dynamic load balancing procedure. The proposed memory access control aims to improve the data locality of a dynamic set of particles. The numerical simulations presented here contain up to 7.8 million particles, considering a visco-elastic contact model and a rolling-friction assumption.
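To illustrate the locality idea, here is a minimal two-dimensional sketch (the paper works in 3D): the classic xy2d Hilbert algorithm maps grid cells to positions along the curve, and sorting particles by that index places spatial neighbours close together in memory. The 2D restriction and the function names are our simplification, not the paper's code.

```python
def hilbert_index(n, x, y):
    """Map cell (x, y) on an n-by-n grid (n a power of two) to its
    position d along the Hilbert curve (classic xy2d algorithm)."""
    d = 0
    s = n // 2
    while s > 0:
        rx = 1 if (x & s) > 0 else 0
        ry = 1 if (y & s) > 0 else 0
        d += s * s * ((3 * rx) ^ ry)
        # rotate/flip the quadrant so the sub-curve is oriented correctly
        if ry == 0:
            if rx == 1:
                x = n - 1 - x
                y = n - 1 - y
            x, y = y, x
        s //= 2
    return d

def sort_particles(cells, n):
    """Reorder (x, y) cells so spatial neighbours end up close in memory."""
    return sorted(cells, key=lambda p: hilbert_index(n, p[0], p[1]))
```

Because consecutive Hilbert indices are always spatially adjacent cells, iterating particles in this order tends to keep neighbouring particles in the same cache lines, which is the cache-performance gain the paper exploits.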

Findings

A real landslide is adopted as reference to evaluate the numerical approach. Three-dimensional simulations are compared in terms of the deposition pattern of the Shum Wan Road landslide. The results show that the methodology permits the simulation of models with a good control of load balancing and memory access. The improvement in cache performance significantly reduces the processing time for large-scale models.

Originality/value

The proposed approach allows the application of DEM in several practical engineering problems of large scale. It also introduces the use of HSFC in the optimization of memory access for DEM simulations.

Details

Engineering Computations, vol. 33 no. 8
Type: Research Article
ISSN: 0264-4401


Article
Publication date: 1 February 2001

Emma Pearse



Details

Library Hi Tech News, vol. 18 no. 2
Type: Research Article
ISSN: 0741-9058

Article
Publication date: 1 August 2016

Bao-Rong Chang, Hsiu-Fen Tsai, Yun-Che Tsai, Chin-Fu Kuo and Chi-Chung Chen


Abstract

Purpose

The purpose of this paper is to integrate and optimize a multiple big data processing platform with the features of high performance, high availability and high scalability in big data environment.

Design/methodology/approach

First, the integration of Apache Hive, Cloudera Impala and BDAS Shark makes the platform support SQL-like queries. Next, users access a single interface, and the proposed optimizer automatically selects the best-performing big data warehouse platform. Finally, the distributed memory storage system Memcached, incorporated into the distributed file system Apache HDFS, is employed for fast caching of query results. Therefore, if users issue the same SQL command, the same result is returned rapidly from the cache system instead of repeating the search in the big data warehouse and taking far longer to retrieve.
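The fast path described above can be sketched as follows. This is an illustrative sketch only: a plain Python dict stands in for Memcached, and `backend_query` is a hypothetical stand-in for the warehouse search.

```python
import hashlib

class QueryResultCache:
    """Cache each SQL query's result so a repeated query is answered
    from memory instead of re-running the warehouse search.
    A dict stands in for Memcached in this sketch."""
    def __init__(self, backend_query):
        self.backend_query = backend_query   # slow warehouse call
        self.cache = {}
        self.hits = 0

    def query(self, sql):
        # Derive a stable, fixed-length key from the SQL text
        key = hashlib.sha1(sql.encode()).hexdigest()
        if key in self.cache:
            self.hits += 1                   # served from cache
        else:
            self.cache[key] = self.backend_query(sql)
        return self.cache[key]
```

Keying on the exact SQL text mirrors the behaviour described in the abstract: only a byte-identical repeated command hits the cache.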

Findings

As a result, the proposed approach significantly improves overall performance and dramatically reduces search time when querying a database, especially for highly repeated SQL commands under multi-user mode.

Research limitations/implications

Currently, Shark’s latest stable version, 0.9.1, does not support the latest versions of Spark and Hive. In addition, this software stack supports only Oracle JDK 7; using Oracle JDK 8 or OpenJDK causes serious errors and leaves some components unable to run.

Practical implications

One problem with this system is that some blocks go missing when too many blocks are stored for a single result (about 100,000 records). Another is that writing sequentially into the in-memory cache wastes time.

Originality/value

When the remaining memory capacity is 2 GB or less on each server, Impala and Shark will have a lot of page swapping, causing extremely low performance. When the data scale is larger, it may cause the JVM I/O exception and make the program crash. However, when the remaining memory capacity is sufficient, Shark is faster than Hive and Impala. Impala’s consumption of memory resources is between those of Shark and Hive. This amount of remaining memory is sufficient for Impala’s maximum performance. In this study, each server allocates 20 GB of memory for cluster computing and sets the amount of remaining memory as Level 1: 3 percent (0.6 GB), Level 2: 15 percent (3 GB) and Level 3: 75 percent (15 GB) as the critical points. The program automatically selects Hive when memory is less than 15 percent, Impala at 15 to 75 percent and Shark at more than 75 percent.
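The memory-based platform selection can be sketched directly from the thresholds quoted above; the function name and the percentage-based interface are ours.

```python
def choose_engine(free_mem_gb, total_mem_gb=20.0):
    """Pick the query engine from remaining memory, following the cut-offs
    quoted above: Hive below 15% free, Impala from 15% to 75%,
    Shark above 75%. The 20 GB total matches the paper's cluster setup."""
    free_pct = 100.0 * free_mem_gb / total_mem_gb
    if free_pct < 15:
        return "Hive"      # low memory: avoid page swapping in Impala/Shark
    if free_pct <= 75:
        return "Impala"    # mid range: Impala's sweet spot
    return "Shark"         # ample memory: Shark is fastest
```

The critical points 0.6 GB (3 percent), 3 GB (15 percent) and 15 GB (75 percent) from the abstract fall exactly on these branch boundaries.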

Article
Publication date: 31 December 2006

Kwong Yuen Lai, Zahir Tari and Peter Bertok


Abstract

Caching is commonly used to improve the performance of mobile computers. Due to the limitations of wireless networks (e.g. low bandwidth, intermittent connectivity), ensuring the consistency of cached data becomes a difficult issue. Existing research has shown that broadcast-based cache invalidation techniques can effectively maintain cache consistency for mobile applications. However, most existing performance analyses of cache invalidation algorithms were carried out through simulation; an analytical study is therefore important for a deeper understanding of broadcast-based invalidation techniques. In this paper, we present detailed analytical models of the major existing cache invalidation schemes. The models provide a basis to highlight the strengths and weaknesses of the different schemes and facilitate further investigation into cache invalidation for mobile environments. Extensive simulations have also been performed and verify the accuracy of the models developed.
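As a rough sketch of one family of broadcast-based schemes analysed in this line of work (a timestamp-style invalidation report), the server periodically broadcasts which items changed recently, and each client drops those items, or its whole cache if it was disconnected for longer than the report covers. The function names and the flat-dict cache are our simplifications.

```python
def build_invalidation_report(update_times, now, window):
    """Server side: broadcast the ids of items updated within the last
    `window` seconds (a timestamp-style invalidation report)."""
    return {item for item, t in update_times.items() if now - t <= window}

def apply_report(cache, cache_times, report, last_sync, now, window):
    """Client side: a client disconnected longer than the window cannot
    trust the report (older updates are not listed), so it drops the
    whole cache; otherwise it drops only the invalidated items."""
    if now - last_sync > window:
        cache.clear()
        cache_times.clear()
        return
    for item in report:
        cache.pop(item, None)
        cache_times.pop(item, None)
```

The forced full drop after long disconnection is precisely the weakness that several of the analysed schemes try to mitigate, which is what makes an analytical comparison useful.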

Details

International Journal of Pervasive Computing and Communications, vol. 2 no. 1
Type: Research Article
ISSN: 1742-7371


Article
Publication date: 1 March 2003

Hsiang‐Fu Yu, Yi‐Ming Chen, Shih‐Yong Wang and Li‐Ming Tseng



Abstract

Traditionally, file transfer protocol (FTP) servers were the major archive providers, and users relied on the Archie server to locate FTP archives. With the extreme popularity of the WWW, Web servers are now important archive providers and users download archives via HTTP. To reduce HTTP traffic, proxy cache servers are deployed on the Internet. However, we find that the hit rate of archives in cache servers is quite low. This study proposes a combination of caching and better searching mechanisms to alleviate the problem. We enable a proxy server to automatically collect WWW and FTP archives from its cache, organize them in the form of an FTP directory, and offer the directory list to the Archie server. Accordingly, users can find archives on WWW and FTP servers through Archie and download them directly from the proxy server, improving the reuse of cached archives. A system was implemented and operated in a real environment to evaluate the approach, and the results are discussed.

Details

Internet Research, vol. 13 no. 1
Type: Research Article
ISSN: 1066-2243


Article
Publication date: 1 June 1989

Edward Valauskas


Abstract

Motorola and Intel introduce new processors. Microprocessors are the core of all personal computers and workstations, literally the computational heart on which hardware and software manufacturers depend to provide increasing power in ever-shrinking packages. The two main microprocessor firms, Intel and Motorola, have accelerated the pace of processor design to the point that hardware and software take months to fully exploit new processor capabilities. At April's COMDEX/Spring 89 conference in Chicago, Intel announced the newest extension of its 80x86 microprocessor line, the i486. Motorola replied with the latest member of its 680x0 family, the 68040 processor. Both chips check in at a million transistors or more, claim remarkable calculation speeds, promise to be backward compatible with their slower relatives, and adopt a hybrid architecture.

Details

Library Workstation Report, vol. 6 no. 6
Type: Research Article
ISSN: 1041-7923

Article
Publication date: 1 August 2005

Jinbao Li, Yingshu Li, My T. Thai and Jianzhong Li


Abstract

This paper investigates query processing in MANETs, studying cache techniques and multi-join database operations. For data caching, a group-caching strategy is proposed: using the cache and an index of the cached data, queries can be processed at a single node or within the group containing that node. For multi-join, a cost evaluation model and a query plan generation algorithm are presented. Query cost is evaluated from the size of the transmitted data, the transmission distance and the query cost at each node; these evaluations determine the nodes on which the query should execute and the join order. Theoretical analysis and experimental results show that the proposed group-caching-based query processing and cost-based join strategy are efficient in MANETs and suit their mobility, disconnection and multi-hop characteristics. The communication cost between nodes is reduced and query efficiency is greatly improved.
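A toy version of such a cost-based join ordering might look like the following. The cost formula (result size times hop distance, plus local query cost) and the assumption that joins never grow the running result are our simplifications, not the paper's exact model; the names are ours.

```python
from itertools import permutations

def plan_cost(order, sizes, distances, local_cost):
    """Cost of joining relations in the given order: each step ships the
    running result to the next node (size x hop distance) and pays that
    node's local processing cost."""
    total = 0.0
    acc = sizes[order[0]]                 # size of the running result
    for prev, nxt in zip(order, order[1:]):
        total += acc * distances[(prev, nxt)] + local_cost[nxt]
        acc = min(acc, sizes[nxt])        # assume joins never grow the result
    return total

def best_join_order(relations, sizes, distances, local_cost):
    """Exhaustively pick the cheapest join order (fine for a few nodes;
    the paper uses a plan generation algorithm instead)."""
    return min(permutations(relations),
               key=lambda p: plan_cost(p, sizes, distances, local_cost))
```

Even this toy model shows the core trade-off: starting from the smallest relation keeps the shipped result small over the expensive hops, which is how transmission cost dominates plan choice in a MANET.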

Details

International Journal of Pervasive Computing and Communications, vol. 1 no. 3
Type: Research Article
ISSN: 1742-7371


Article
Publication date: 30 October 2007

Glenn R. Luecke, Ying Li and Martin Cuma


Abstract

Purpose

The purpose of this paper is to evaluate how to use nodes in a cluster efficiently by studying the NAS Parallel Benchmarks (NASPB) on Intel Xeon and AMD Opteron dual CPU Linux clusters.

Design/methodology/approach

The performance results of the NASPB are presented both with one MPI process per node (1 ppn) and with two MPI processes per node (2 ppn). These benchmark results were analyzed by considering the impact of cache effects, code scalability, memory bandwidth within nodes, and the impact of MPI and the MPI communication network. Memory bandwidth was benchmarked using MPI versions of the STREAM benchmark. The impact of MPI and the MPI communication network was evaluated by benchmarking the performance of MPI sends and receives, MPI broadcast, and the MPI all-to-all routines.

Findings

The performance results from running the NASPB and from the memory bandwidth benchmarks show that better performance can sometimes be achieved using 1 ppn. Performance results show that the AMD Opteron/Myrinet cluster is able to achieve significantly better utilization of the second processor than the Intel Xeon/Myrinet cluster.

Practical implications

Most Linux clusters are purchased with two processors per node. One would like to run all applications on a cluster with two processors per node using 2 ppn instead of 1 ppn in order to utilize the second processor on each node. However, our results show that this is not always the best choice. Users should always assess their program performance with both 1 ppn and 2 ppn before running production calculations. This issue becomes even more important with the emergence of multi‐core processors.

Originality/value

To the authors' best knowledge, this is the only detailed comparison of AMD Opteron and Intel Xeon dual-processor-node parallel performance on large Myrinet clusters. The paper should be of value to anybody considering running on or purchasing an AMD- or Intel-based Linux cluster.

Details

Benchmarking: An International Journal, vol. 14 no. 6
Type: Research Article
ISSN: 1463-5771


Article
Publication date: 5 September 2016

G. Ramani and K. Geetha


Abstract

Purpose

Memory plays a vital role in designing embedded systems. A larger memory can accommodate more and larger applications but increases cost, area and energy requirements. Hence, the purpose of this paper is to propose code compression techniques that address this issue by minimizing the code size of the application program, compressing the instructions with the highest static frequency.

Design/methodology/approach

The idea is based on a static- and dynamic-frequency-based algorithm combined with bitmask and dictionary-based compression for the MIPS32 processor, in order to minimize code size and improve the compression ratio.
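A stripped-down sketch of the dictionary side of such a scheme follows: the most frequent instructions go into a small dictionary and are replaced by short indices. The bitmask extension (matching near-miss instructions that differ in a few bits) is omitted, and the bit widths are illustrative, not the paper's.

```python
from collections import Counter

def build_dictionary(instructions, size):
    """Fill the dictionary with the instructions of highest static frequency."""
    return [ins for ins, _ in Counter(instructions).most_common(size)]

def compress(instructions, dictionary):
    """Emit ('D', index) for a dictionary hit, else ('U', instruction)
    for an uncompressed word."""
    index = {ins: i for i, ins in enumerate(dictionary)}
    return [("D", index[ins]) if ins in index else ("U", ins)
            for ins in instructions]

def compression_ratio(instructions, compressed, index_bits=4, word_bits=32):
    """Compressed bits / original bits; each entry carries one flag bit
    saying whether it is a dictionary index or a raw instruction word."""
    bits = sum(1 + (index_bits if tag == "D" else word_bits)
               for tag, _ in compressed)
    return bits / (len(instructions) * word_bits)
```

A ratio well below 1 requires that a few instructions dominate the static frequency profile, which is the property the frequency-based selection exploits.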

Findings

The experimental result shows that the proposed system achieves up to 67 percent compression efficiency.

Originality/value

The paper presents enhanced versions of the code compression technique.

Details

COMPEL - The international journal for computation and mathematics in electrical and electronic engineering, vol. 35 no. 5
Type: Research Article
ISSN: 0332-1649

