Search results

1 – 10 of 901
Article
Publication date: 1 August 2005

Jinbao Li, Yingshu Li, My T. Thai and Jianzhong Li

Abstract

This paper investigates query processing in MANETs. Cache techniques and multi‐join database operations are studied. For data caching, a group‐caching strategy is proposed. Using the cache and the index of the cached data, queries can be processed at a single node or within the group containing that node. For multi‐join, a cost evaluation model and a query plan generation algorithm are presented. Query cost is evaluated based on parameters including the size of the transmitted data, the transmission distance and the query cost at each single node. According to these evaluations, the nodes on which the query should be executed and the join order are determined. Theoretical analysis and experimental results show that the proposed group‐caching-based query processing and the cost-based join strategy are efficient in MANETs. They are well suited to the mobility, disconnection and multi‐hop features of MANETs. The communication cost between nodes is reduced and query efficiency is greatly improved.
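The lookup order the abstract describes (own cache, then the group's index of cached data, then forwarding into the network) can be sketched as follows. This is an illustrative model under assumed names, not the authors' implementation:

```python
# Hypothetical sketch of group-caching lookup: a query is answered from the
# node's own cache, then from another node in the same group (found via the
# shared index of cached data), and only otherwise forwarded into the MANET.

class GroupNode:
    def __init__(self, node_id, group_index):
        self.node_id = node_id
        self.cache = {}                 # query -> result cached locally
        self.group_index = group_index  # query -> id of the node holding it

    def process_query(self, query, group_nodes):
        if query in self.cache:                       # 1. answer locally
            return self.cache[query], "local"
        holder = self.group_index.get(query)          # 2. answer within group
        if holder is not None and holder in group_nodes:
            return group_nodes[holder].cache[query], "group"
        return None, "forward"                        # 3. forward into MANET


index = {"q1": "B"}
a = GroupNode("A", index)
b = GroupNode("B", index)
b.cache["q1"] = "result-1"
nodes = {"A": a, "B": b}
print(a.process_query("q1", nodes))  # served by group member B, no flooding
```

The point of the group index is that a miss at one node costs only an intra-group hop rather than a network-wide query.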

Details

International Journal of Pervasive Computing and Communications, vol. 1 no. 3
Type: Research Article
ISSN: 1742-7371

Article
Publication date: 19 June 2017

Tsuyoshi Donen, Shingo Otsubo, Ryo Nishide, Ian Piumarta and Hideyuki Takada

Abstract

Purpose

The purpose of this study is to reduce internet traffic when performing collaborative Web search. Mobile terminals are now in widespread use and people are increasingly using them for collaborative Web search to achieve a common goal. When performing such searches, the authors want to reduce internet traffic as much as possible, for example, to avoid bandwidth throttling that occurs when data usage exceeds a certain quota.

Design/methodology/approach

To reduce internet traffic, the authors use a proxy system based on the peer cache mechanism. The proxy shares Web content stored on mobile terminals participating in an ad hoc Bluetooth network, focusing on content that is accessed multiple times from different terminals. The proxy's effectiveness was evaluated in experiments designed to replicate realistic usage scenarios.
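The peer-cache mechanism amounts to a three-step lookup: local store, then the stores of Bluetooth peers, then the internet. A minimal sketch, with all names and structure assumed rather than taken from the paper:

```python
# Illustrative peer-cache proxy: before fetching a URL from the internet,
# check the terminal's own store, then each peer's store on the ad hoc
# network. Only a miss everywhere generates internet traffic.

def fetch(url, local_store, peer_stores, origin_fetch):
    if url in local_store:
        return local_store[url], "local"
    for peer in peer_stores:              # Bluetooth peers, queried in turn
        if url in peer:
            content = peer[url]
            local_store[url] = content    # keep a copy for later requests
            return content, "peer"
    content = origin_fetch(url)           # fall back to the internet
    local_store[url] = content
    return content, "internet"


local = {}
peer = {"http://example.com/menu": "<html>menu</html>"}
content, source = fetch("http://example.com/menu", local, [peer],
                        lambda u: "<html>origin</html>")
print(source)  # "peer" -- no internet traffic generated for this request
```

Content fetched from a peer is re-shared locally, which is what makes repeated accesses from different terminals cheap.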

Findings

Experimental results show that the proxy reduces internet traffic by approximately 20 per cent when four people collaboratively search the Web to find good restaurants for a social event.

Originality/value

Unlike previous work on co-operative Web proxies, the authors study a form of collaborative Web caching between mobile devices within an ad hoc Bluetooth network created specifically for the purpose of sharing cached content, acting orthogonally to (and independently of) traditional hierarchical Web caching.

Details

International Journal of Web Information Systems, vol. 13 no. 2
Type: Research Article
ISSN: 1744-0084

Article
Publication date: 1 January 1993

Frank Kappe, Gerald Pani and Florian Schnabel

Abstract

Proposes how a global Hypermedia system could evolve from the “Hyper‐G” system, currently being developed at Graz University. Overviews the development of Internet from post‐war visionary speculation to present. Explains some limitations of the current architecture (World Wide Web, Gopher, WAIS). Presents in detail the architecture of a massively distributed hypermedia system. Explains how the proposed system would be much faster, overcoming such problems as documents being oriented by location rather than by subject. Suggests possible applications.

Details

Internet Research, vol. 3 no. 1
Type: Research Article
ISSN: 1066-2243

Article
Publication date: 1 August 2016

Bao-Rong Chang, Hsiu-Fen Tsai, Yun-Che Tsai, Chin-Fu Kuo and Chi-Chung Chen

Abstract

Purpose

The purpose of this paper is to integrate and optimize a multiple big data processing platform with the features of high performance, high availability and high scalability in a big data environment.

Design/methodology/approach

First, the integration of Apache Hive, Cloudera Impala and BDAS Shark makes the platform support SQL-like queries. Next, users can access a single interface, and the proposed optimizer automatically selects the best-performing big data warehouse platform. Finally, the distributed memory storage system Memcached, incorporated into the distributed file system Apache HDFS, is employed for fast caching of query results. Therefore, if users issue the same SQL command, the result is returned rapidly from the cache system instead of repeating the search in the big data warehouse and taking longer to retrieve.
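The result-caching step described above keys cached results by the SQL text, so a repeated command never reaches the warehouse. A minimal sketch, with a plain dict standing in for Memcached and a stub for the warehouse call:

```python
# Sketch of the query-result cache: results are keyed by the SQL command
# itself. A repeated command is a cache hit and skips the warehouse entirely.
# The dict is a stand-in for Memcached; names here are illustrative.

class QueryCache:
    def __init__(self, run_in_warehouse):
        self.store = {}                        # SQL text -> cached result
        self.run_in_warehouse = run_in_warehouse

    def query(self, sql):
        if sql in self.store:
            return self.store[sql], True       # cache hit: fast path
        result = self.run_in_warehouse(sql)    # cache miss: slow path
        self.store[sql] = result
        return result, False


calls = []
def warehouse(sql):
    calls.append(sql)                          # record each real execution
    return [("row", 1)]

qc = QueryCache(warehouse)
qc.query("SELECT * FROM t")                    # miss: runs in the warehouse
_, hit = qc.query("SELECT * FROM t")
print(hit, len(calls))  # True 1 -- the repeat never reached the warehouse
```

This is why the gains concentrate on highly repeatable commands under multi-user load: every user after the first shares the cached result.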

Findings

As a result, the proposed approach significantly improves overall performance and dramatically reduces search time when querying a database, especially for highly repeatable SQL commands under multi-user mode.

Research limitations/implications

Currently, Shark’s latest stable version 0.9.1 does not support the latest versions of Spark and Hive. In addition, this series of software supports only Oracle JDK7. Using Oracle JDK8 or OpenJDK will cause serious errors, and some software will be unable to run.

Practical implications

The problem with this system is that some blocks are missing when too many blocks are stored in one result (about 100,000 records). Another problem is that sequential writing into the in-memory cache wastes time.

Originality/value

When the remaining memory capacity is 2 GB or less on each server, Impala and Shark will have a lot of page swapping, causing extremely low performance. When the data scale is larger, it may cause the JVM I/O exception and make the program crash. However, when the remaining memory capacity is sufficient, Shark is faster than Hive and Impala. Impala’s consumption of memory resources is between those of Shark and Hive. This amount of remaining memory is sufficient for Impala’s maximum performance. In this study, each server allocates 20 GB of memory for cluster computing and sets the amount of remaining memory as Level 1: 3 percent (0.6 GB), Level 2: 15 percent (3 GB) and Level 3: 75 percent (15 GB) as the critical points. The program automatically selects Hive when memory is less than 15 percent, Impala at 15 to 75 percent and Shark at more than 75 percent.
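The engine-selection rule stated above reduces to comparing the fraction of remaining memory against the two critical points. The thresholds come from the abstract; the function name is illustrative:

```python
# Engine selection by remaining memory, as described in the abstract:
# below 15 per cent pick Hive, 15 to 75 per cent pick Impala, above
# 75 per cent pick Shark (fastest when memory is ample).

def select_engine(remaining_fraction):
    """Pick the warehouse engine from the fraction of memory still free."""
    if remaining_fraction < 0.15:
        return "Hive"      # low memory: avoid Impala/Shark page swapping
    if remaining_fraction <= 0.75:
        return "Impala"    # moderate memory: between Shark and Hive in cost
    return "Shark"         # ample memory: Shark outperforms both


print(select_engine(0.03))  # Hive   (Level 1: 3 per cent, 0.6 GB of 20 GB)
print(select_engine(0.15))  # Impala (Level 2: 15 per cent, 3 GB)
print(select_engine(0.80))  # Shark  (Level 3: 75 per cent, 15 GB)
```

The rationale tracks the memory behaviour reported above: Shark and Impala thrash when free memory is scarce, so the selector falls back to progressively less memory-hungry engines.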

Article
Publication date: 1 May 2005

Hung‐Chang Hsiao, Chung‐Ta King and Shih‐Yen Gao

Abstract

Resource discovery in peer‐to‐peer (P2P) systems has been extensively studied. Unfortunately, most of the systems studied are not designed to take advantage of the heterogeneity in peer nodes. In this paper, we propose a novel P2P overlay called RATTAN, which serves as an underlay of a Gnutella‐like network. RATTAN exploits the heterogeneity of peer nodes by structuring capable nodes as the core of the overlay. Using a tree‐like structure, RATTAN can maximize the search scope with a minimal number of query messages. We evaluate RATTAN with simulation. The experiments show the following interesting results. First, RATTAN is robust by exploiting redundant overlay links. Second, the maximum bandwidth demand for processing the protocol of a single RATTAN overlay is nearly 1M bits/sec. However, around 80% of the nodes merely take 66 bits/sec. One implication is that we can use a small number of relatively capable peers (e.g., stable machines with a 100M bits/sec network interface) to process the 1M bits/sec protocol overhead and serve other peers that only need to spend 66 bits/sec for processing protocol overhead.

Details

International Journal of Pervasive Computing and Communications, vol. 1 no. 2
Type: Research Article
ISSN: 1742-7371

Article
Publication date: 1 December 1995

David Flater and Yelena Yesha

Abstract

Provides a new answer to the resource discovery problem, which arises because although the Internet makes it possible for users to retrieve enormous amounts of information, it provides insufficient support for locating the specific information that is needed. ALIBI (Adaptive Location of Internetworked Bases of Information) is a new tool that succeeds in locating information without the use of centralized resource catalogs, navigation, or costly searching. Its powerful query‐based interface eliminates the need for the user to connect to one network site after another to find information or to wrestle with overloaded centralized catalogs and archives. This functionality was made possible by an assortment of significant new algorithms and techniques, including classification‐based query routing, fully distributed cooperative caching, and a query language that combines the practicality of Boolean logic with the expressive power of text retrieval. The resulting information system is capable of providing fully automatic resource discovery and retrieval access to a limitless variety of information bases.

Details

Internet Research, vol. 5 no. 4
Type: Research Article
ISSN: 1066-2243

Article
Publication date: 6 May 2014

Nan Zhang, Timo Smura, Björn Grönvall and Heikki Hämmäinen

Abstract

Purpose

The purpose of this paper is to identify and analyze the key uncertainties and to construct alternative future scenarios for Internet content delivery. The relative positions and roles of different actors and content delivery technologies in each scenario are then discussed. As traffic volume rapidly grows, the current Internet architecture faces scalability issues. To meet the demand, technical solutions utilizing caching and name-based routing are developed.

Design/methodology/approach

This work followed a scenario planning process, and two workshops were organized for identifying the key trends and uncertainties. Industry architecture notation was used to systematically illustrate and compare the constructed scenarios.

Findings

Of the 94 forces identified, the revenue model and the Internet service provider’s (ISP’s) role in content provision were singled out as the two most important uncertainties, upon which four scenarios were constructed. In-network caching technologies are strong candidates in ISP-dominated scenarios. Content delivery networks are more likely outcomes in scenarios where content providers’ role is significant.

Research limitations/implications

The paper focuses on qualitative analysis of scenarios. Utilizing, for instance, system dynamics to model interdependencies between the trends and uncertainties could provide a path toward quantitative analysis.

Originality/value

The paper increases understanding of relative positions and roles of different actors and technologies in possible future scenarios. The findings are important, especially for ISPs, content providers and technology vendors. The scenarios can be used to identify desirable futures and strategies to achieve them and to make informed choices in technology design to meet the demands of key actors.

Article
Publication date: 6 October 2020

Mulki Indana Zulfa, Rudy Hartanto and Adhistya Erna Permanasari

Abstract

Purpose

Internet users and Web-based applications continue to grow every day. The response time on a Web application really determines the convenience of its users. Caching Web content is one strategy that can be used to speed up response time. This strategy is divided into three main techniques, namely, Web caching, Web prefetching and application-level caching. The purpose of this paper is to put forward a literature review of caching strategy research that can be used in Web-based applications.

Design/methodology/approach

The method used in this paper was as follows: determine the review method, conduct the review process, analyze pros and cons and explain the conclusions. The review is carried out by searching literature from leading journals and conferences. The search process starts by determining keywords related to caching strategies. To keep the literature current with developments in website technology, search results are limited to the past 10 years, to English and to computer science.

Findings

Note in advance that Web caching and Web prefetching are slightly overlapping techniques, because they share the goal of reducing latency on the user’s side. In fact, the two techniques rest on different basic mechanisms. Web caching uses the basic mechanism of cache replacement, the algorithm that changes cache objects in memory when the cache capacity is full, whereas Web prefetching uses the basic mechanism of predicting cache objects that may be accessed in the future. This paper also contributes practical guidelines for choosing the appropriate caching strategy for Web-based applications.
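The distinction between the two mechanisms can be made concrete: replacement decides which object to evict when the cache is full, while prefetching predicts which object to load before it is requested. Both sketches below are illustrative, not drawn from the surveyed papers:

```python
# Web caching vs Web prefetching in miniature.
# Replacement: evict the least recently used object when capacity is reached.
# Prefetching: predict the next object from observed access transitions.

from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = OrderedDict()       # insertion order = recency order

    def access(self, key, value):
        if key in self.items:
            self.items.move_to_end(key)  # mark as most recently used
        self.items[key] = value
        if len(self.items) > self.capacity:
            self.items.popitem(last=False)   # evict the coldest entry


def prefetch_next(history, transitions):
    """Predict the object likely to be requested after the last access."""
    return transitions.get(history[-1]) if history else None


cache = LRUCache(2)
for page in ["a", "b", "c"]:             # capacity 2, so "a" is evicted
    cache.access(page, page.upper())
print(list(cache.items))                 # ['b', 'c']
print(prefetch_next(["a", "b"], {"b": "c"}))  # 'c' would be fetched early
```

The shared goal (lower user-side latency) is served by opposite timings: replacement acts after a request, prefetching acts before one.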

Originality/value

This paper conducts a state-of-the art review of caching strategies that can be used in Web applications. Exclusively, this paper presents taxonomy, pros and cons of selected research and discusses data sets that are often used in caching strategy research. This paper also provides another contribution, namely, practical instructions for Web developers to decide the caching strategy.

Details

International Journal of Web Information Systems, vol. 16 no. 5
Type: Research Article
ISSN: 1744-0084

Article
Publication date: 1 August 2000

Bert J. Dempsey

Abstract

Every user of the World Wide Web understands why the WWW is often ridiculed as the World Wide Wait. The WWW and other applications on the Internet have been developed with a client‐server orientation that, in its simplest form, involves a centralized information repository to which users (clients) send requests. This single‐server model suffers from performance problems when clients are too numerous, when clients are physically far away in the network, when the materials being delivered become very large and hence stress the wide‐area bandwidth, and when the information has a real‐time delivery component, as with streaming audio and video materials. Engineering information delivery solutions that break the single‐site model has become an important aspect of next‐generation WWW delivery systems. Intends to help the information professional understand what new directions the delivery infrastructure of the WWW is taking and why these technical changes will impact users around the globe, especially in bandwidth‐poor areas of the Internet.

Details

The Electronic Library, vol. 18 no. 4
Type: Research Article
ISSN: 0264-0473

Article
Publication date: 1 March 2000

Details

Assembly Automation, vol. 20 no. 1
Type: Research Article
ISSN: 0144-5154
