Search results

1 – 10 of over 7000
Article
Publication date: 25 July 2022

Sravanthi Chutke, Nandhitha N.M. and Praveen Kumar Lendale

Abstract

Purpose

With the advent of technology, a huge amount of data is being transmitted and received through the internet. Large bandwidth and storage are required for transmitting and storing this data. Hence, compression of the data to be transmitted over the channel is unavoidable. The main purpose of the proposed system is to use the bandwidth effectively. The videos are compressed at the transmitter’s end and reconstructed at the receiver’s end. Compression also reduces storage requirements.

Design/methodology/approach

The paper proposes a novel compression technique for three-dimensional (3D) videos using a zig-zag 3D discrete cosine transform. The method applies a 3D discrete cosine transform to the videos, followed by a zig-zag scanning process. Finally, to convert the data into a single bit stream for transmission, a run-length encoding technique is used. The videos are reconstructed using the inverse 3D discrete cosine transform, inverse zig-zag scanning (inverse quantization) and inverse run-length coding. The proposed method is simple and reduces the complexity of conventional techniques.
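
A minimal sketch of this pipeline, assuming a video block stored as a (frames, height, width) NumPy array; the quantization step and the diagonal ordering used to generalize the 2D zig-zag scan to 3D are illustrative choices, not the authors' exact parameters:

```python
import numpy as np
from scipy.fft import dctn

def zigzag_order(shape):
    """3D indices in order of increasing diagonal i + j + k, a natural
    three-dimensional generalization of the 2D zig-zag scan."""
    idx = np.array([(i, j, k)
                    for i in range(shape[0])
                    for j in range(shape[1])
                    for k in range(shape[2])])
    return idx[np.argsort(idx.sum(axis=1), kind="stable")]

def run_length_encode(seq):
    """Collapse a 1D integer sequence into (value, run) pairs."""
    pairs, prev, run = [], int(seq[0]), 1
    for v in seq[1:]:
        if v == prev:
            run += 1
        else:
            pairs.append((prev, run))
            prev, run = int(v), 1
    pairs.append((prev, run))
    return pairs

def compress_block(block, q=16):
    """DCT -> quantize -> zig-zag scan -> run-length encode one video block."""
    coeffs = dctn(block.astype(float), norm="ortho")  # 3D DCT
    quant = np.round(coeffs / q).astype(int)          # uniform quantization
    order = zigzag_order(block.shape)
    return run_length_encode(quant[tuple(order.T)])   # scan groups the zero runs
```

Decompression reverses the same steps: expand the runs, undo the zig-zag scan, multiply by the quantization step and apply the inverse 3D DCT.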

Findings

Coding reduction, code word reduction, peak signal-to-noise ratio (PSNR), mean square error, compression percentage and compression ratio are calculated, and the results show the advantage of the proposed method over conventional methods.

Originality/value

Zig-zag quantization and run-length encoding combined with the 3D discrete cosine transform achieve compression of up to 90% with a PSNR of 41.98 dB for 3D videos. The proposed method can be used in multimedia applications where bandwidth, storage and data costs are the major concerns.

Details

International Journal of Pervasive Computing and Communications, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1742-7371

Open Access
Article
Publication date: 28 June 2022

Olli Väänänen and Timo Hämäläinen

Abstract

Purpose

Minimizing the energy consumption in a wireless sensor node is important for lengthening the lifetime of a battery. Radio transmission is the most energy-consuming task in a wireless sensor node, and by compressing the sensor data in the online mode, it is possible to reduce the number of transmission periods. This study aims to demonstrate that temporal compression methods present an effective method for lengthening the lifetime of a battery-powered wireless sensor node.

Design/methodology/approach

In this study, the energy consumption of a LoRa-based sensor node was evaluated and measured. The experiments were conducted with different LoRaWAN data rate parameters, with and without compression algorithms implemented to compress the sensor data in online mode. The effect of temporal compression algorithms on the overall energy consumption was measured.
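
The compression algorithms evaluated in the study are not reproduced here; below is a minimal sketch of one simple temporal compression scheme of this kind, deadband filtering, which suppresses transmissions while readings stay within a threshold of the last transmitted value. The threshold and the transmit() callback are illustrative assumptions:

```python
def deadband_compress(samples, threshold, transmit):
    """Transmit a sample only when it deviates from the last transmitted
    value by more than `threshold`; the receiver reconstructs skipped
    samples as the last transmitted value, so fewer LoRa uplinks are sent."""
    last_sent = None
    for t, value in samples:                    # (timestamp, reading) pairs
        if last_sent is None or abs(value - last_sent) > threshold:
            transmit(t, value)                  # one radio transmission
            last_sent = value

# Example: keeps 2 of 5 readings with a 0.5-unit deadband.
readings = [(0, 20.0), (1, 20.1), (2, 20.2), (3, 21.0), (4, 21.1)]
deadband_compress(readings, 0.5, lambda t, v: print(f"t={t}: send {v}"))
```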

Findings

Energy consumption was measured with different LoRaWAN spreading factors. The LoRaWAN transmission energy consumption significantly depends on the spreading factor used. The other significant factors affecting the LoRa-based sensor node energy consumption are the measurement interval and sleep mode current consumption. The results show that temporal compression algorithms are an effective method for reducing the energy consumption of a LoRa sensor node by reducing the number of LoRa transmission periods.

Originality/value

This paper demonstrates, with a practical case, that it is possible to reduce the overall energy consumption of a wireless sensor node by compressing sensor data in online mode with simple temporal compression algorithms.

Details

Sensor Review, vol. 42 no. 5
Type: Research Article
ISSN: 0260-2288

Article
Publication date: 23 November 2010

Nils Hoeller, Christoph Reinke, Jana Neumann, Sven Groppe, Christian Werner and Volker Linnemann

Abstract

Purpose

In the last decade, XML has become the de facto standard for data exchange on the world wide web (WWW). The benefits of data exchangeability for supporting system and software heterogeneity at the application level, together with easy WWW integration, make XML an ideal data format for many other application and network scenarios, such as wireless sensor networks (WSNs). Moreover, the usage of XML encourages using standardized techniques like SOAP to adapt the service-oriented paradigm to sensor network engineering. Nevertheless, integrating XML into WSN data management is limited by low hardware resources, which require efficient XML data management strategies to bridge the general resource gap. The purpose of this paper is to present two separate strategies for integrating XML data management in WSNs.

Design/methodology/approach

The paper presents two separate strategies for integrating XML data management in WSNs, both of which have been implemented and run on today's sensor node platforms. The paper shows how XML data can be processed and how XPath queries can be evaluated dynamically. In an extended evaluation, the memory and energy efficiency of the two strategies are compared, and both solutions are shown to have application domains fully realizable on today's sensor node products.
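
The authors' encoding is not detailed in the abstract; purely as an illustration of the general idea, the sketch below dictionary-encodes SAX-style XML events into small integer tokens and evaluates a child-axis-only XPath over the token stream with an explicit stack, in the spirit of a pushdown automaton. All names and the event format are hypothetical:

```python
OPEN, CLOSE, TEXT = 0, 1, 2

def encode_events(events):
    """events: ('open', tag), ('close', tag) or ('text', value) tuples,
    e.g. from a SAX-style parser; tag names become small integer tokens."""
    tag_ids, stream = {}, []
    for kind, payload in events:
        if kind in ("open", "close"):
            tid = tag_ids.setdefault(payload, len(tag_ids))
            stream.append((OPEN if kind == "open" else CLOSE, tid))
        else:
            stream.append((TEXT, payload))
    return stream, tag_ids

def match_path(stream, tag_ids, path):
    """Yield text nodes whose open-tag stack equals `path` (child axis only)."""
    want = [tag_ids[t] for t in path]
    stack = []
    for kind, payload in stream:
        if kind == OPEN:
            stack.append(payload)
        elif kind == CLOSE:
            stack.pop()
        elif stack == want:          # TEXT token at the queried path
            yield payload

events = [("open", "sensors"), ("open", "node"), ("open", "temp"),
          ("text", "21.4"), ("close", "temp"), ("close", "node"),
          ("close", "sensors")]
stream, ids = encode_events(events)
print(list(match_path(stream, ids, ["sensors", "node", "temp"])))  # ['21.4']
```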

Findings

This work shows that dynamic XML data management and query evaluation is possible on sensor nodes with strict limitations in terms of memory, processing power and energy supply.

Originality/value

The paper presents an optimized stream‐based XML compression technique and shows how XML queries can be evaluated on compressed XML bit streams using generic pushdown automata. To the best of the authors' knowledge, this is the first complete approach on integrating dynamic XML data management into WSNs.

Details

International Journal of Web Information Systems, vol. 6 no. 4
Type: Research Article
ISSN: 1744-0084

Article
Publication date: 3 October 2016

Yair Wiseman

Abstract

Purpose

The purpose of this paper is to study extensive enlargement and safety of flight data recorder memory.

Design/methodology/approach

The study involves moving the memory of flight data recorders from an internal embedded device to the cloud.
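
A minimal sketch of the described architecture, in which flight-data records are mirrored from the embedded buffer to cloud storage; the endpoint URL and the record layout are hypothetical, not from the paper:

```python
import json
import requests

CLOUD_ENDPOINT = "https://example.org/fdr/upload"   # hypothetical URL

def mirror_batch(records):
    """Upload one batch of flight-data records; returns True on success so
    the caller keeps the batch in the embedded buffer and retries later if
    the air-to-ground link is unavailable."""
    try:
        resp = requests.post(CLOUD_ENDPOINT, data=json.dumps(records),
                             headers={"Content-Type": "application/json"},
                             timeout=10)
        return resp.ok
    except requests.RequestException:
        return False
```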

Findings

The implementation has made the embedded memory device of flight data recorder effectively unlimited, and, hence, much more information can be stored.

Research limitations/implications

The probability of a flight data recorder being damaged or lost in a crash is not high, but the implementation can be very helpful in cases such as aerial disappearances.

Practical implications

The implication is a larger and better-protected memory for flight data recorders.

Social implications

The causes of crashes can be found faster, and immediate actions can be taken to remedy the failures.

Originality/value

The use of the internet and cellphones in airplanes is commonplace at present. It is suggested that this technology be applied to flight data recorders as well.

Details

Aircraft Engineering and Aerospace Technology, vol. 88 no. 6
Type: Research Article
ISSN: 1748-8842

Article
Publication date: 13 March 2007

B. Pradhan, K. Sandeep, Shattri Mansor, Abdul Rahman Ramli and Abdul Rashid B. Mohamed Sharif

Abstract

Purpose

In GIS applications, a realistic representation of a terrain requires a great number of triangles, which ultimately increases the data size. For interactive online GIS programs, it has become essential to reduce the number of triangles in order to save storage space. There is therefore a need to visualize terrains at different levels of detail: for example, a region of high interest should be rendered at a higher resolution than a region of low or no interest. Wavelet technology provides an efficient approach to achieve this, as it can decompose terrain data into a hierarchy. On the other hand, the number of triangles at subsequent levels should not become too small; otherwise, the representation of the terrain degrades.

Design/methodology/approach

This paper proposes a new computational code (please see the Appendix for the flow chart and pseudo code) for the triangulated irregular network (TIN) using Delaunay triangulation methods. These algorithms have proved to be efficient tools in numerical methods, such as the finite element method, and in image processing. Further, second generation wavelet techniques, popularly known as “lifting schemes”, have been applied to compress the TIN data.

Findings

A new interpolation wavelet filter for TIN is applied in two steps, namely splitting and elevation. In the splitting step, a triangle is divided into several sub-triangles, and the elevation step is used to “modify” the point values (point coordinates for geometry) after the splitting. This data set is then compressed at the desired locations by using second generation wavelets.
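
A minimal sketch of one such lifting step, shown on a 1D elevation profile for brevity rather than on the triangulated mesh used in the paper; the linear predict/update pair below is the classic second generation construction and assumes an even-length, periodically extended signal:

```python
import numpy as np

def lifting_forward(signal):
    """One level of the linear interpolating wavelet via lifting: split into
    even/odd samples, predict odd from even neighbours, keep residuals."""
    even = signal[0::2].astype(float)
    odd = signal[1::2].astype(float)
    detail = odd - 0.5 * (even + np.roll(even, -1))       # predict step
    coarse = even + 0.25 * (np.roll(detail, 1) + detail)  # update step
    return coarse, detail

def lifting_inverse(coarse, detail):
    """Exact inverse: undo the update, then the prediction, then merge."""
    even = coarse - 0.25 * (np.roll(detail, 1) + detail)
    odd = detail + 0.5 * (even + np.roll(even, -1))
    out = np.empty(even.size + odd.size)
    out[0::2], out[1::2] = even, odd
    return out

# Compression: zero out small detail coefficients where the terrain is smooth.
profile = np.array([10.0, 10.5, 11.0, 11.4, 11.8, 12.3, 12.8, 13.2])
coarse, detail = lifting_forward(profile)
detail[np.abs(detail) < 0.05] = 0.0
print(np.max(np.abs(lifting_inverse(coarse, detail) - profile)))  # tiny error
```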

Originality/value

A new algorithm for second generation wavelet compression has been proposed for TIN data compression. The quality of the geographical surface representation obtained with the proposed technique is compared with the original terrain. The results show that this method can significantly reduce the size of the data set.

Details

Engineering Computations, vol. 24 no. 2
Type: Research Article
ISSN: 0264-4401

Article
Publication date: 1 August 1999

Schubert Foo, Siu Cheung Hui and See Wai Yip

Abstract

The Internet environment, with its packet-switched network and lack of resource reservation mechanisms, has made the delivery of low bit-rate real-time communication services particularly difficult and challenging. The potentially high transmission delay and data packet loss under varying network conditions lead to unpleasant and unintelligible audio and jerky video play-out. The Internet TCP/IP protocol suite can be extended with new mechanisms in an attempt to tackle such problems. In this research, an integrated transmission mechanism that incorporates a number of existing techniques to enhance quality and deliver “acceptable” real-time services is proposed. These techniques include data compression, data buffering, dynamic rate control, packet loss replacement, silence deletion and a virtual video play-out mechanism. The proposed transmission mechanism is designed as a generic communication system so that it can be used in different systems and conditions. This approach has been successfully implemented and demonstrated using three separate systems: the Internet Phone, WebVideo and a video-conferencing tool.
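
As a rough illustration of two of the listed techniques, data buffering and packet loss replacement, the sketch below reorders packets in a play-out buffer and substitutes the last good packet for a missing one; the class and packet layout are illustrative assumptions:

```python
class PlayoutBuffer:
    """Reorders packets by sequence number; a missing packet is replaced
    by the most recent good one, a simple loss-replacement strategy."""

    def __init__(self):
        self.pending = {}          # seq -> payload, absorbs network jitter
        self.next_seq = 0
        self.last_good = b"\x00"   # play silence until the first packet

    def push(self, seq, payload):
        self.pending[seq] = payload

    def pop(self):
        """Called once per play-out interval by the audio/video clock."""
        payload = self.pending.pop(self.next_seq, None)
        if payload is None:
            payload = self.last_good   # loss replacement: repeat last packet
        else:
            self.last_good = payload
        self.next_seq += 1
        return payload

# Packet 1 is lost in transit; play-out substitutes packet 0's payload.
buf = PlayoutBuffer()
buf.push(0, b"frame0")
buf.push(2, b"frame2")
print([buf.pop() for _ in range(3)])   # [b'frame0', b'frame0', b'frame2']
```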

Details

Internet Research, vol. 9 no. 3
Type: Research Article
ISSN: 1066-2243

Open Access
Article
Publication date: 29 July 2020

Mahmood Al-khassaweneh and Omar AlShorman

Abstract

In the big data era, image compression is of significant importance. Notably, compression of large images is required for everyday tasks, including electronic data communications and internet transactions. However, two important measures should be considered for any compression algorithm: the compression factor and the quality of the decompressed image. In this paper, we use the Frei-Chen bases technique and the Modified Run Length Encoding (RLE) to compress images. The Frei-Chen bases technique is applied in the first stage, in which the average subspace is applied to each 3 × 3 block. Those blocks with the highest energy are replaced by a single value that represents the average value of the pixels in the corresponding block. Even though the Frei-Chen bases technique provides lossy compression, it maintains the main characteristics of the image. Additionally, it enhances the compression factor, making it advantageous to use. In the second stage, RLE is applied to further increase the compression factor without adding any distortion to the decompressed image. Integrating RLE with the Frei-Chen bases technique, as described in the proposed algorithm, ensures high-quality decompressed images and a high compression rate. The results of the proposed algorithm are shown to be comparable in quality and performance with other existing methods.
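
A minimal sketch of the two-stage idea, assuming a grayscale image as a NumPy array; the uniform 3 × 3 mask is the Frei-Chen average basis vector, and the energy threshold deciding which blocks are averaged is an illustrative parameter:

```python
import numpy as np

AVG = np.ones((3, 3)) / 3.0        # Frei-Chen average basis mask (unit norm)

def run_length_encode(seq):
    """(value, run) pairs over a flat sequence."""
    values, runs = [seq[0]], [1]
    for v in seq[1:]:
        if v == values[-1]:
            runs[-1] += 1
        else:
            values.append(v)
            runs.append(1)
    return list(zip(values, runs))

def compress(img, energy_frac=0.99):
    """Stage 1: replace 3x3 blocks whose energy lies mostly in the average
    subspace by their mean. Stage 2: run-length encode the raster scan."""
    h, w = (d - d % 3 for d in img.shape)       # crop to a multiple of 3
    out = img[:h, :w].astype(float).copy()
    for i in range(0, h, 3):
        for j in range(0, w, 3):
            block = out[i:i+3, j:j+3]
            coef = float(np.sum(block * AVG))   # projection on average basis
            total = float(np.sum(block ** 2))
            if total > 0 and coef ** 2 / total >= energy_frac:
                block[:] = block.mean()         # lossy: keep the average only
    return run_length_encode(out.astype(np.uint8).ravel())
```

Averaging near-uniform blocks creates long runs of identical pixels, which is exactly what makes the second RLE stage effective.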

Details

Applied Computing and Informatics, vol. 20 no. 1/2
Type: Research Article
ISSN: 2634-1964

Article
Publication date: 20 December 2007

Chuanfeng Lv and Qiangfu Zhao

Abstract

Purpose

In recent years, principal component analysis (PCA) has attracted great attention in dimension reduction. However, since a very large transformation matrix must be used for reconstructing the original data, PCA has not been successfully applied to image compression. To solve this problem, this paper aims to propose a new technique called k‐PCA.

Design/methodology/approach

k‐PCA is a combination of vector quantization (VQ) and PCA. The basic idea is to divide the problem space into k clusters using VQ, and then find a PCA encoder for each cluster. The point is that if the k‐PCA encoder is obtained using data containing enough information, it can be used as a semi‐universal encoder to compress all images in a given domain.
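
A minimal sketch of the k-PCA idea using scikit-learn, with k-means standing in for the VQ stage; the parameters k and m (retained components) are illustrative, and each cluster must contain more than m patches for its PCA to be fit:

```python
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

def fit_kpca(patches, k=8, m=16):
    """patches: (n, d) array of flattened image blocks, with n >> k * m.
    Returns the VQ model plus one PCA encoder per cluster."""
    vq = KMeans(n_clusters=k, n_init=10, random_state=0).fit(patches)
    encoders = [PCA(n_components=m).fit(patches[vq.labels_ == c])
                for c in range(k)]
    return vq, encoders

def encode_patch(patch, vq, encoders):
    """Encode one patch as (cluster id, m PCA coefficients)."""
    c = int(vq.predict(patch.reshape(1, -1))[0])
    return c, encoders[c].transform(patch.reshape(1, -1))[0]

def decode_patch(c, coeffs, encoders):
    """Reconstruct the patch from its cluster's PCA basis."""
    return encoders[c].inverse_transform(coeffs.reshape(1, -1))[0]
```

The transformation matrices live in the shared encoder, not in the encoded data, which is the source of the higher compression ratio the abstract describes.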

Findings

Although a k‐PCA encoder is more complex than a single PCA encoder, the compression ratio can be much higher because the transformation matrices can be excluded from the encoded data. The performance of the k‐PCA encoder can be improved further through learning. For this purpose, this paper proposes an extended LBG algorithm.

Originality/value

The effectiveness of the k‐PCA is demonstrated through experiments with several well‐known test images.

Details

International Journal of Pervasive Computing and Communications, vol. 3 no. 2
Type: Research Article
ISSN: 1742-7371

Article
Publication date: 1 April 1974

Susan E. Creasey, Michael F. Lynch and J. Howard Petrie

Abstract

The application of a variable-to-fixed-length compression coding technique to two bibliographic databases (MARC and INSPEC) is described. By transforming characters or digrams into bit patterns that more accurately reflect the character distributions of the databases, and applying the encoding process, varying degrees of compression can be obtained.
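
A minimal sketch of variable-to-fixed-length coding in this spirit: the codebook holds every single character plus the most frequent digrams, up to 256 entries in total, and encoding greedily emits one fixed 8-bit code per matched character or digram. The codebook size is an illustrative choice:

```python
from collections import Counter

def build_codebook(text, size=256):
    """Every single character gets a code; remaining slots go to the most
    frequent digrams, so common pairs cost one code instead of two."""
    singles = sorted(set(text))
    digrams = Counter(text[i:i + 2] for i in range(len(text) - 1))
    room = max(0, size - len(singles))
    entries = singles + [d for d, _ in digrams.most_common(room)]
    return {s: i for i, s in enumerate(entries)}

def encode(text, book):
    """Greedy longest match: try the digram first, fall back to the char."""
    codes, i = [], 0
    while i < len(text):
        pair = text[i:i + 2]
        if len(pair) == 2 and pair in book:
            codes.append(book[pair])
            i += 2
        else:
            codes.append(book[text[i]])
            i += 1
    return codes                     # one fixed-size (8-bit) code per entry

text = "the theme of the thesis"
book = build_codebook(text)
print(len(encode(text, book)), "codes for", len(text), "characters")
```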

Details

Program, vol. 8 no. 4
Type: Research Article
ISSN: 0033-0337

Article
Publication date: 18 October 2021

Anilkumar Chandrashekhar Korishetti and Virendra S. Malemath

Abstract

Purpose

High-efficiency video coding (HEVC) is the latest video coding standard and has better coding efficiency than the H.264/advanced video coding (AVC) standard. The purpose of this paper is to design and develop an effective block search mechanism for the HEVC video compression standard so that the developed compression scheme can be applied in communication applications.

Design/methodology/approach

In the proposed method, a rate-distortion (RD) trade-off, named the regressive RD trade-off, is used based on the conditional autoregressive value at risk (CaViar) model. The motion estimation (ME) is based on a new block search mechanism, developed by modifying the Ordered Tree-based Hex-Octagon (OrTHO)-search algorithm along with a chronological Salp swarm algorithm (SSA) based on a deep recurrent neural network (deepRNN), which optimally decides the search shape, the search length of the tree and the dimension. The chronological SSA is developed by integrating the chronological concept into the SSA and is used for training the deep RNN for ME.
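
The OrTHO search and its chronological-SSA/deepRNN optimizer are not reproduced here; as a generic illustration of the kind of block search mechanism being tuned, the sketch below implements a standard hexagon-based search that minimizes the sum of absolute differences (SAD) between blocks of consecutive frames:

```python
import numpy as np

HEX = [(0, 0), (-2, 0), (2, 0), (-1, 2), (1, 2), (-1, -2), (1, -2)]
SQUARE = [(0, 0), (0, 1), (0, -1), (1, 0), (-1, 0)]

def sad(cur, ref, y, x, by, bx, n):
    """SAD between the current block at (by, bx) and a candidate at (y, x)."""
    h, w = ref.shape
    if y < 0 or x < 0 or y + n > h or x + n > w:
        return np.inf                           # candidate outside the frame
    return np.abs(cur[by:by+n, bx:bx+n].astype(int)
                  - ref[y:y+n, x:x+n].astype(int)).sum()

def hex_search(cur, ref, by, bx, n=16):
    """Return the motion vector (dy, dx) for the n-by-n block at (by, bx)."""
    cy, cx = by, bx
    for _ in range(64):                         # coarse stage, bounded
        cost, dy, dx = min((sad(cur, ref, cy + dy, cx + dx, by, bx, n), dy, dx)
                           for dy, dx in HEX)
        if (dy, dx) == (0, 0):
            break                               # centre is best: refine locally
        cy, cx = cy + dy, cx + dx
    _, dy, dx = min((sad(cur, ref, cy + dy, cx + dx, by, bx, n), dy, dx)
                    for dy, dx in SQUARE)       # fine stage: small square
    return cy + dy - by, cx + dx - bx
```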

Findings

The competing methods used for the comparative analysis of the proposed OrTHO-search-based RD + chronological salp swarm algorithm (RD + C-SSA) based deep RNN are the support vector machine (SVM), the fast encoding framework, wavefront-based high parallel (WHP) and the OrTHO-search-based RD method. The proposed video compression method obtained a maximum peak signal-to-noise ratio (PSNR) of 42.9180 dB and a maximum structural similarity index measure (SSIM) of 0.9827.

Originality/value

In this research, an effective block search mechanism was developed by modifying the OrTHO-search algorithm, together with the chronological SSA based on deepRNN, for the HEVC video compression standard.

Details

Journal of Engineering, Design and Technology, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1726-0531
