Search results

1 – 10 of over 6000
Article
Publication date: 23 November 2010

Nils Hoeller, Christoph Reinke, Jana Neumann, Sven Groppe, Christian Werner and Volker Linnemann

Abstract

Purpose

In the last decade, XML has become the de facto standard for data exchange on the World Wide Web (WWW). The benefits of data exchangeability, which supports system and software heterogeneity at the application level and eases WWW integration, make XML an ideal data format for many other application and network scenarios, such as wireless sensor networks (WSNs). Moreover, using XML encourages the adoption of standardized techniques such as SOAP to bring the service-oriented paradigm to sensor network engineering. Nevertheless, integrating XML into WSN data management is limited by the nodes' low hardware resources, which require efficient XML data management strategies to bridge this resource gap. The purpose of this paper is to present two separate strategies for integrating XML data management into WSNs.

Design/methodology/approach

The paper presents two separate strategies for integrating XML data management into WSNs; both have been implemented and run on today's sensor node platforms. The paper shows how XML data can be processed and how XPath queries can be evaluated dynamically. In an extended evaluation, the two strategies are compared in terms of memory and energy efficiency, and both are shown to be fully applicable to today's sensor node products.

Findings

This work shows that dynamic XML data management and query evaluation are possible on sensor nodes with strict limitations on memory, processing power and energy supply.

Originality/value

The paper presents an optimized stream‐based XML compression technique and shows how XML queries can be evaluated on compressed XML bit streams using generic pushdown automata. To the best of the authors' knowledge, this is the first complete approach to integrating dynamic XML data management into WSNs.
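The idea of evaluating a path query with a pushdown automaton can be illustrated with a stack over a stream of tag events. This is a minimal sketch of the general principle only, not the authors' compressed-bitstream implementation; the element names are invented for the example.

```python
# Sketch: evaluating a simple XPath path such as /sensors/node/temp over
# a stream of start/end tag events, using a stack in the style of a
# pushdown automaton (illustrative only; element names are invented).

def match_path(events, path):
    """Return text of elements whose open-tag stack equals `path`.

    `events` is a stream of ("start", tag), ("text", value) or
    ("end", tag) tuples, as a streaming XML parser would emit.
    """
    stack, results = [], []
    for kind, value in events:
        if kind == "start":
            stack.append(value)          # push on opening tag
        elif kind == "text" and stack == path:
            results.append(value)        # stack matches the query path
        elif kind == "end":
            stack.pop()                  # pop on closing tag
    return results

events = [
    ("start", "sensors"),
    ("start", "node"),
    ("start", "temp"), ("text", "21.5"), ("end", "temp"),
    ("end", "node"),
    ("end", "sensors"),
]
print(match_path(events, ["sensors", "node", "temp"]))  # ['21.5']
```

The same stack logic works whether the events come from plain XML or, as in the paper, are decoded on the fly from a compressed bit stream.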

Details

International Journal of Web Information Systems, vol. 6 no. 4
Type: Research Article
ISSN: 1744-0084

Article
Publication date: 3 October 2016

Yair Wiseman

Abstract

Purpose

The purpose of this paper is to study extensive enlargement and safety of flight data recorder memory.

Design/methodology/approach

The study involves moving the memory of flight data recorders from an internal embedded device to the cloud.

Findings

The implementation makes the embedded memory device of the flight data recorder effectively unlimited; hence, much more information can be stored.

Research limitations/implications

The probability that a flight data recorder is damaged or lost in a crash is not high, but the implementation can be very helpful in cases such as aerial disappearances.

Practical implications

The implication is larger and better-protected memory for flight data recorders.

Social implications

Finding the causes of crashes becomes faster, and immediate action can be taken to remedy the failures.

Originality/value

The use of the internet and cellphones aboard airplanes is now commonplace. It is suggested that this technology be applied to flight data recorders as well.

Details

Aircraft Engineering and Aerospace Technology, vol. 88 no. 6
Type: Research Article
ISSN: 1748-8842

Article
Publication date: 13 March 2007

B. Pradhan, K. Sandeep, Shattri Mansor, Abdul Rahman Ramli and Abdul Rashid B. Mohamed Sharif

Abstract

Purpose

In GIS applications, a realistic representation of a terrain requires a great number of triangles, which ultimately increases the data size. For online interactive GIS programs, it has become essential to reduce the number of triangles in order to save storage space. Therefore, there is a need to visualize terrains at different levels of detail: for example, a region of high interest should be rendered at a higher resolution than a region of low or no interest. Wavelet technology provides an efficient approach to achieving this; using it, terrain data can be decomposed into a hierarchy. On the other hand, the number of triangles retained at subsequent levels should not be too small; otherwise, the terrain is poorly represented.

Design/methodology/approach

This paper proposes a new computational code (see the Appendix for the flow chart and pseudo code) for triangulated irregular networks (TINs) using Delaunay triangulation methods. Such algorithms have proved to be efficient tools in numerical methods such as the finite element method and in image processing. Further, second‐generation wavelet techniques, popularly known as “lifting schemes”, have been applied to compress the TIN data.

Findings

A new interpolation wavelet filter for TINs has been applied in two steps, namely splitting and elevation. In the splitting step, a triangle is divided into several sub‐triangles, and the elevation step is then used to “modify” the point values (point coordinates for geometry) after the splitting. The data set is then compressed at the desired locations using second‐generation wavelets.
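The split-and-predict structure underlying lifting schemes can be sketched in one dimension. This is a generic linear-lifting illustration of the principle, not the paper's TIN-specific splitting/elevation filter:

```python
# One-dimensional lifting sketch (illustrative only; the paper's filter
# operates on TIN triangles, not 1-D signals). Split the signal into
# even/odd samples, predict each odd sample from its even neighbours,
# and store only the prediction error (the wavelet detail).

def lift_forward(signal):
    even, odd = signal[0::2], signal[1::2]
    details = [o - (even[i] + even[min(i + 1, len(even) - 1)]) / 2
               for i, o in enumerate(odd)]
    return even, details

def lift_inverse(even, details):
    signal = []
    for i, e in enumerate(even):
        signal.append(e)
        if i < len(details):
            # re-predict and add back the stored detail
            pred = (even[i] + even[min(i + 1, len(even) - 1)]) / 2
            signal.append(details[i] + pred)
    return signal

s = [2.0, 3.0, 4.0, 5.0, 6.0, 7.0]
even, details = lift_forward(s)
print(details)                      # small details on smooth data
assert lift_inverse(even, details) == s   # perfect reconstruction
```

On smooth data the details are near zero and compress well, which is the same property the TIN filter exploits on terrain surfaces.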

Originality/value

A new algorithm for second‐generation wavelet compression has been proposed for TIN data compression. The quality of the geographical surface representation after using the proposed technique is compared with the original terrain. The results show that this method can significantly reduce the size of the data set.

Details

Engineering Computations, vol. 24 no. 2
Type: Research Article
ISSN: 0264-4401

Article
Publication date: 1 August 1999

Schubert Foo, Siu Cheung Hui and See Wai Yip

Abstract

The Internet environment, with its packet‐switched network and lack of resource reservation mechanisms, has made the delivery of low bit‐rate real‐time communication services particularly difficult and challenging. The potentially high transmission delay and data packet loss under varying network conditions lead to unpleasant and unintelligible audio and jerky video play‐out. The Internet TCP/IP protocol suite can be extended with new mechanisms in an attempt to tackle such problems. In this research, an integrated transmission mechanism that incorporates a number of existing techniques to enhance quality and deliver “acceptable” real‐time services is proposed. These techniques include data compression, data buffering, dynamic rate control, packet loss replacement, silence deletion and a virtual video play‐out mechanism. The proposed transmission mechanism is designed as a generic communication system so that it can be used in different systems and conditions. This approach has been successfully implemented and demonstrated using three separate systems: the Internet Phone, WebVideo and a video‐conferencing tool.
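One of the listed techniques, silence deletion, can be sketched as an energy threshold applied to audio frames before transmission. The threshold and frame values below are arbitrary assumptions for illustration, not parameters from the paper:

```python
# Sketch of silence deletion (illustrative; threshold and frames are
# invented): audio frames whose mean energy falls below a threshold are
# dropped before transmission, saving bandwidth during pauses in speech.

def delete_silence(frames, threshold=0.01):
    def energy(frame):
        return sum(s * s for s in frame) / len(frame)
    return [f for f in frames if energy(f) >= threshold]

frames = [
    [0.5, -0.4, 0.3],        # speech: high energy, kept
    [0.001, -0.002, 0.0],    # near-silence: dropped
    [0.2, 0.1, -0.3],        # speech: kept
]
print(len(delete_silence(frames)))  # 2
```

In a real system the receiver would fill the deleted gaps with comfort noise or simply extend the play-out schedule.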

Details

Internet Research, vol. 9 no. 3
Type: Research Article
ISSN: 1066-2243

Article
Publication date: 20 December 2007

Chuanfeng Lv and Qiangfu Zhao

Abstract

Purpose

In recent years, principal component analysis (PCA) has attracted great attention in dimension reduction. However, since a very large transformation matrix must be used for reconstructing the original data, PCA has not been successfully applied to image compression. To solve this problem, this paper aims to propose a new technique called k‐PCA.

Design/methodology/approach

k‐PCA is a combination of vector quantization (VQ) and PCA. The basic idea is to divide the problem space into k clusters using VQ and then find a PCA encoder for each cluster. The point is that if the k‐PCA encoder is trained on data containing enough information, it can be used as a semi‐universal encoder to compress all images in a given domain.
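The VQ-then-PCA idea can be sketched with plain k-means clustering followed by a per-cluster SVD. This is a simplified reading of the description above, not the paper's k-PCA encoder; the data, initialization, and parameters are invented for the example:

```python
import numpy as np

# Illustrative k-PCA sketch: VQ (plain k-means) to cluster the input
# blocks, then a PCA basis per cluster. Simplified; not the paper's code.

def train_kpca(blocks, k, n_components, iters=10):
    centroids = blocks[:k].copy()           # simple deterministic init
    for _ in range(iters):                  # k-means = the VQ step
        labels = np.argmin(
            ((blocks[:, None] - centroids) ** 2).sum(-1), axis=1)
        for j in range(k):
            members = blocks[labels == j]
            if len(members):
                centroids[j] = members.mean(0)
    bases = []
    for j in range(k):                      # per-cluster PCA via SVD
        centered = blocks[labels == j] - centroids[j]
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        bases.append(vt[:n_components])
    return centroids, bases

def encode_block(block, centroids, bases):
    j = int(np.argmin(((block - centroids) ** 2).sum(-1)))
    return j, bases[j] @ (block - centroids[j])   # cluster id + few coeffs

def decode_block(j, coeffs, centroids, bases):
    return centroids[j] + bases[j].T @ coeffs

# Toy data: two well-separated clusters of 4-dimensional "blocks".
rng = np.random.default_rng(1)
blocks = np.empty((40, 4))
blocks[0::2] = rng.normal(0.0, 0.1, (20, 4))
blocks[1::2] = rng.normal(5.0, 0.1, (20, 4))
centroids, bases = train_kpca(blocks, k=2, n_components=2)
j, coeffs = encode_block(blocks[0], centroids, bases)
reconstructed = decode_block(j, coeffs, centroids, bases)
```

The compression gain comes from storing only a cluster index and a few coefficients per block, while the transformation matrices stay on the encoder/decoder side, as the abstract notes.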

Findings

Although a k‐PCA encoder is more complex than a single PCA encoder, the compression ratio can be much higher because the transformation matrices can be excluded from the encoded data. The performance of the k‐PCA encoder can be improved further through learning. For this purpose, this paper proposes an extended LBG algorithm.

Originality/value

The effectiveness of the k‐PCA is demonstrated through experiments with several well‐known test images.

Details

International Journal of Pervasive Computing and Communications, vol. 3 no. 2
Type: Research Article
ISSN: 1742-7371

Content available
Article
Publication date: 29 July 2020

Mahmood Al-khassaweneh and Omar AlShorman

Abstract

In the big data era, image compression is of significant importance: compression of large images is required for everyday tasks, including electronic data communications and internet transactions. However, two important measures should be considered for any compression algorithm: the compression factor and the quality of the decompressed image. In this paper, we use the Frei-Chen bases technique and the Modified Run Length Encoding (RLE) to compress images. The Frei-Chen bases technique is applied in the first stage, in which the average subspace is applied to each 3 × 3 block. Blocks with the highest energy are replaced by a single value that represents the average value of the pixels in the corresponding block. Even though the Frei-Chen bases technique provides lossy compression, it maintains the main characteristics of the image while enhancing the compression factor, making it advantageous to use. In the second stage, RLE is applied to further increase the compression factor without adding any distortion to the resultant decompressed image. Integrating RLE with the Frei-Chen bases technique, as described in the proposed algorithm, ensures high-quality decompressed images and a high compression rate. The results of the proposed algorithm are shown to be comparable in quality and performance with other existing methods.
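The RLE stage can be illustrated on a single pixel row. This is a minimal sketch of classical run-length encoding only; the Frei-Chen averaging stage and the paper's modifications to RLE are omitted:

```python
# Minimal run-length encoding sketch: runs of equal pixel values are
# stored as [value, count] pairs. Lossless, so it adds no distortion.

def rle_encode(pixels):
    runs = []
    for p in pixels:
        if runs and runs[-1][0] == p:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([p, 1])       # start a new run
    return runs

def rle_decode(runs):
    return [p for p, n in runs for _ in range(n)]

row = [7, 7, 7, 7, 0, 0, 255, 255, 255]
encoded = rle_encode(row)
print(encoded)                        # [[7, 4], [0, 2], [255, 3]]
assert rle_decode(encoded) == row     # round-trips exactly
```

RLE pays off precisely after a stage like Frei-Chen block averaging, which replaces whole blocks with a single value and so creates long runs.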

Details

Applied Computing and Informatics, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2634-1964

Article
Publication date: 1 April 1974

Susan E. Creasey, Michael F. Lynch and J. Howard Petrie

Abstract

The application of a variable‐to‐fixed‐length compression coding technique to two bibliographic databases (MARC and INSPEC) is described. By appropriately transforming characters or digrams into bit patterns that more accurately reflect the distributions of characters in the databases, and applying the encoding process, varying degrees of compression can be obtained.
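Variable-to-fixed coding over digrams can be sketched as greedy substitution against a codebook in which frequent digrams and single characters each receive one fixed-length code. The digram table below is an invented toy example, not the MARC/INSPEC character statistics used in the paper:

```python
# Sketch of variable-to-fixed coding via digram substitution: variable-
# length input units (single characters or frequent digrams) each map to
# one fixed-length output code. The digram list here is an assumption.

DIGRAMS = ["th", "he", "in", "er"]       # invented "frequent" digrams

def build_codebook(alphabet):
    units = list(alphabet) + DIGRAMS
    return {u: i for i, u in enumerate(units)}   # unit -> fixed code

def encode(text, codebook):
    out, i = [], 0
    while i < len(text):
        pair = text[i:i + 2]
        if pair in codebook:             # greedy: prefer digrams
            out.append(codebook[pair])
            i += 2
        else:
            out.append(codebook[text[i]])
            i += 1
    return out

cb = build_codebook("abcdefghijklmnopqrstuvwxyz ")
codes = encode("the weather", cb)
print(len("the weather"), len(codes))    # 11 8
```

Eleven input characters become eight fixed-length codes; with digrams chosen from the actual corpus statistics, as in the paper, the gain is correspondingly larger.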

Details

Program, vol. 8 no. 4
Type: Research Article
ISSN: 0033-0337

To view the access options for this content please click here
Article
Publication date: 18 October 2021

Anilkumar Chandrashekhar Korishetti and Virendra S. Malemath

Abstract

Purpose

High-efficiency video coding (HEVC) is the latest video coding standard and has better coding efficiency than the H.264/advanced video coding (AVC) standard. The purpose of this paper is to design and develop an effective block search mechanism for the HEVC video compression standard, such that the developed compression standard can be applied in communication applications.

Design/methodology/approach

In the proposed method, a rate-distortion (RD) trade-off, named the regressive RD trade-off, is used based on the conditional autoregressive value at risk (CaViar) model. Motion estimation (ME) relies on a new block search mechanism, developed by modifying the Ordered Tree-based Hex-Octagon (OrTHO)-search algorithm, together with a chronological Salp swarm algorithm (SSA) based on a deep recurrent neural network (deepRNN) for optimally deciding the shape of the search, the search length of the tree and the dimension. The chronological SSA is developed by integrating the chronological concept into the SSA and is used for training the deep RNN for ME.

Findings

The competing methods used for the comparative analysis of the proposed OrTHO-search-based RD + chronological SSA (RD + C-SSA) deep RNN are the support vector machine (SVM), the fast encoding framework, wavefront-based high parallel (WHP) and the OrTHO-search-based RD method. The proposed video compression method obtained a maximum peak signal-to-noise ratio (PSNR) of 42.9180 dB and a maximum structural similarity index measure (SSIM) of 0.9827.
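The PSNR figure quoted above follows the standard definition, PSNR = 10 · log10(MAX² / MSE). A quick reference implementation of that standard formula (not code from the paper):

```python
import math

# Standard PSNR between an original and a reconstructed signal, with
# MAX = 255 for 8-bit pixel data. Higher is better; identical signals
# give infinite PSNR.

def psnr(original, reconstructed, max_value=255.0):
    mse = sum((a - b) ** 2
              for a, b in zip(original, reconstructed)) / len(original)
    if mse == 0:
        return float("inf")
    return 10 * math.log10(max_value ** 2 / mse)

print(round(psnr([100, 100], [110, 90]), 2))  # 28.13
```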

Originality/value

In this research, an effective block search mechanism was developed by modifying the OrTHO-search algorithm together with the chronological SSA based on a deepRNN for the HEVC video compression standard.

Details

Journal of Engineering, Design and Technology, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1726-0531

Article
Publication date: 29 January 2018

Wasim Ahmad Bhat

Abstract

Purpose

The purpose of this paper is to investigate the prospects of current storage technologies for long-term preservation of big data in digital libraries.

Design/methodology/approach

The study employs a systematic and critical review of the relevant literature to explore the prospects of current storage technologies for long-term preservation of big data in digital libraries. Online computer databases were searched to identify the relevant literature published between 2000 and 2016. Specific inclusion and exclusion criteria were formulated and applied in two distinct rounds to determine the most relevant papers.

Findings

The study concludes that the current storage technologies are not viable for long-term preservation of big data in digital libraries. They can neither fulfil all the storage demands nor alleviate the financial expenditures of digital libraries. The study also points out that migrating to emerging storage technologies in digital libraries is a long-term viable solution.

Research limitations/implications

The study suggests that continuous innovation and research efforts in current storage technologies are required to lessen the impact of storage shortage on digital libraries, and to allow emerging storage technologies to advance further and take over. At the same time, more aggressive research and development efforts are required by academics and industry to further advance the emerging storage technologies for their timely and swift adoption by digital libraries.

Practical implications

The study reveals that digital libraries, besides incurring significant financial expenditures, will suffer from potential loss of information due to storage shortage for long-term preservation of big data, if current storage technologies are employed by them. Therefore, policy makers and practitioners should meticulously choose storage technologies for long-term preservation of big data in digital libraries.

Originality/value

This type of holistic study that investigates the prospects of magnetic drive technology, solid-state drive technology, and data-reduction techniques for long-term preservation of big data in digital libraries has not been conducted in the field previously, and so provides a novel contribution. The study arms academics, practitioners, policy makers, and industry with the deep understanding of the problem, technical details to choose storage technologies meticulously, greater insight to frame sustainable policies, and opportunities to address various research problems.

Details

Library Hi Tech, vol. 36 no. 3
Type: Research Article
ISSN: 0737-8831

Article
Publication date: 1 June 1991

Howard Falk

Abstract

New developments in tape backup include the use of Digital Audio Tape (DAT) and data compression techniques, but before we take a look at these new features, and some of the available tape units that make use of them, a discussion of tape backup itself seems appropriate. Readers who are thoroughly familiar with tape backup can skip directly to the paragraph headed ‘Data compression for low cost units’.

Details

The Electronic Library, vol. 9 no. 6
Type: Research Article
ISSN: 0264-0473
