Search results

1 – 10 of over 2000
Article

Mahmood Al-khassaweneh and Omar AlShorman

In the big data era, image compression is of significant importance in today’s world. Importantly, compression of large sized images is required for everyday tasks;…

Abstract

In the big data era, image compression is of significant importance. In particular, compression of large images is required for everyday tasks, including electronic data communications and internet transactions. However, two important measures should be considered for any compression algorithm: the compression factor and the quality of the decompressed image. In this paper, we use the Frei-Chen bases technique and a modified Run Length Encoding (RLE) to compress images. The Frei-Chen bases technique is applied in the first stage, in which the average subspace is applied to each 3 × 3 block. Blocks with the highest energy are replaced by a single value that represents the average value of the pixels in the corresponding block. Even though the Frei-Chen bases technique provides lossy compression, it maintains the main characteristics of the image while enhancing the compression factor, making it advantageous to use. In the second stage, RLE is applied to further increase the compression factor without adding any distortion to the decompressed image. Integrating RLE with the Frei-Chen bases technique, as described in the proposed algorithm, ensures high-quality decompressed images and a high compression rate. The results of the proposed algorithm are shown to be comparable in quality and performance with other existing methods.
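
As a rough illustration of the two stages described above, the Python sketch below projects each 3 × 3 block onto the Frei-Chen average basis, replaces blocks whose energy is concentrated in that subspace by their mean (one reading of "highest energy" blocks), and then run-length encodes the pixel stream. The energy threshold, border handling and flattening order are illustrative assumptions, not details taken from the paper.

import numpy as np

def frei_chen_average_stage(img, energy_thresh=0.99):
    """Lossy stage (sketch): replace near-uniform 3x3 blocks by their mean value,
    judged by the energy captured by the Frei-Chen 'average' basis.
    The threshold is an illustrative assumption."""
    avg_basis = np.full((3, 3), 1.0 / 3.0)          # normalised uniform (average) basis
    out = img.astype(float).copy()
    h, w = img.shape
    for r in range(0, h - h % 3, 3):
        for c in range(0, w - w % 3, 3):
            block = img[r:r+3, c:c+3].astype(float)
            total = np.sum(block * block) + 1e-12   # total block energy
            proj = np.sum(block * avg_basis) ** 2   # energy in the average subspace
            if proj / total >= energy_thresh:       # block is nearly flat
                out[r:r+3, c:c+3] = block.mean()    # keep a single representative value
    return out

def run_length_encode(values):
    """Lossless stage: plain run-length encoding of a 1-D sequence."""
    runs = []
    prev, count = values[0], 1
    for v in values[1:]:
        if v == prev:
            count += 1
        else:
            runs.append((prev, count))
            prev, count = v, 1
    runs.append((prev, count))
    return runs

# Usage: smooth a small synthetic image, then run-length encode the pixel stream.
img = np.tile(np.array([[10, 10, 10], [10, 10, 10], [10, 10, 12]]), (2, 2))
smoothed = frei_chen_average_stage(img, energy_thresh=0.98)
runs = run_length_encode(np.rint(smoothed).astype(int).ravel().tolist())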

Details

Applied Computing and Informatics, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2634-1964

Keywords

Article

Hadi Grailu, Mojtaba Lotfizad and Hadi Sadoghi‐Yazdi

The purpose of this paper is to propose a lossy/lossless binary textual image compression method based on an improved pattern matching (PM) technique.

Abstract

Purpose

The purpose of this paper is to propose a lossy/lossless binary textual image compression method based on an improved pattern matching (PM) technique.

Design/methodology/approach

In the Farsi/Arabic script, contrary to the printed Latin script, letters usually attach together and produce various patterns. Hence, some patterns are fully or partially subsets of others. Two new ideas are proposed here. First, the number of library prototypes is reduced by detecting and then removing the fully or partially similar prototypes. Second, a new effective pattern encoding scheme is proposed for all types of patterns, including text and graphics. The new encoding scheme has two operation modes, chain coding and soft PM, chosen depending on the ratio of the pattern area to its chain-code effective length. In order to encode the number sequences, the authors have modified the multi‐symbol QM‐coder. The proposed method has three levels of lossy compression, each of which further increases the compression ratio. The first level applies processing in the chain-code domain, such as omitting small patterns and holes, omitting the inner holes of characters, and smoothing the boundaries of the patterns. The second level applies the selective pixel-reversal technique, and the third level prioritizes the residual patterns for encoding according to their degree of compactness.
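
For readers unfamiliar with the chain-code mode mentioned above, the short Python sketch below produces a Freeman 8-direction chain code from an ordered boundary and then chooses between a chain-code mode and a pattern (soft PM) mode from the ratio of pattern area to chain-code length. The boundary is assumed to be already traced and the threshold is purely illustrative; the paper's soft PM coding and multi-symbol QM-coder are not reproduced here.

import numpy as np

# Freeman 8-direction codes: index = code, value = (row step, column step)
DIRECTIONS = [(0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1), (1, 0), (1, 1)]

def freeman_chain_code(boundary):
    """Encode an ordered, 8-connected boundary (list of (row, col) points)
    as a sequence of Freeman direction codes."""
    return [DIRECTIONS.index((r1 - r0, c1 - c0))
            for (r0, c0), (r1, c1) in zip(boundary, boundary[1:])]

def choose_mode(pattern, chain_code, ratio_thresh=4.0):
    """Pick an encoding mode from the ratio of pattern area to chain-code length,
    mirroring the chain-coding vs. soft-PM decision (threshold is an assumption)."""
    area = int(np.count_nonzero(pattern))
    return "soft_pm" if area / max(len(chain_code), 1) > ratio_thresh else "chain_code"

# Usage: a solid 3x3 square traced clockwise from its top-left corner.
square = np.ones((3, 3), dtype=np.uint8)
boundary = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0), (0, 0)]
codes = freeman_chain_code(boundary)      # [0, 0, 6, 6, 4, 4, 2, 2]
mode = choose_mode(square, codes)         # 'chain_code' for this small pattern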

Findings

Experimental results show that the compression performance of the proposed method is considerably better than that of the best existing binary textual image compression methods: as much as 1.6-3 times higher in the lossy case and 1.3-2.4 times higher in the lossless case at 300 dpi. The maximum compression ratios are achieved for Farsi and Arabic textual images.

Research limitations/implications

Only the binary printed typeset textual images are considered.

Practical implications

The proposed method achieves a high compression ratio for archiving and storage applications.

Originality/value

To the authors' best knowledge, existing textual image compression methods and standards have not so far exploited the full or partial similarity of prototypes to increase the compression ratio for any script. Also, the idea of combining boundary-description methods with run‐length and arithmetic coding techniques has not been used before.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 2 no. 1
Type: Research Article
ISSN: 1756-378X

Keywords

Article

Zhifeng Wang, Chi Zuo and Chunyan Zeng

Recently, detection of double Joint Photographic Experts Group (JPEG) compression has received much attention in the field of Web image forensics. Although…

Abstract

Purpose

Recently, detection of double Joint Photographic Experts Group (JPEG) compression has received much attention in the field of Web image forensics. Although several useful methods have been proposed for detecting double JPEG compression when the quantization matrices of the primary and secondary compression differ, the problem remains difficult when the quantization matrices are the same. Moreover, the methods for the different and the same quantization matrices are implemented independently. This paper aims to build a new unified framework for detecting double JPEG compression.

Design/methodology/approach

First, the Y channel of the JPEG image is cut into 8 × 8 non-overlapping blocks, and two groups of features that characterize the artifacts caused by double JPEG compression with the same and with different quantization matrices are extracted from those blocks. Then, Riemannian manifold learning is applied for dimensionality reduction while preserving the local intrinsic structure of the features. Finally, a deep stacked autoencoder network with seven layers is designed to detect double JPEG compression.
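
As a hedged illustration of the block-wise step, the Python sketch below cuts a Y channel into 8 × 8 non-overlapping blocks, takes the 2-D DCT of each block, and summarizes a few low-frequency coefficients with normalized histograms. The chosen coefficient positions, histogram range and bin count are assumptions for illustration only; the paper's two specific feature groups, the Riemannian manifold learning step and the seven-layer stacked autoencoder are not reproduced.

import numpy as np
from scipy.fft import dctn

def block_dct_histogram_features(y_channel, bins=21):
    """Cut the Y channel into 8x8 non-overlapping blocks and summarise a few
    low-frequency DCT coefficients with normalised histograms (a generic
    double-JPEG-style feature sketch, not the paper's exact feature groups)."""
    h, w = y_channel.shape
    blocks = []
    for r in range(0, h - h % 8, 8):
        for c in range(0, w - w % 8, 8):
            block = y_channel[r:r+8, c:c+8].astype(float) - 128.0  # level shift
            blocks.append(dctn(block, norm='ortho'))               # 2-D DCT per block
    coeffs = np.array(blocks)                                      # (num_blocks, 8, 8)
    feats = []
    for (i, j) in [(0, 1), (1, 0), (1, 1), (0, 2), (2, 0)]:        # low-frequency modes
        vals = np.rint(coeffs[:, i, j])
        hist, _ = np.histogram(vals, bins=bins, range=(-10, 10))
        feats.append(hist / max(len(vals), 1))                     # normalised histogram
    return np.concatenate(feats)                                   # one vector per image

# Usage: extract a feature vector from a random 64x64 'Y channel'.
y = np.random.default_rng(0).integers(0, 256, size=(64, 64))
features = block_dct_histogram_features(y)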

Findings

Experimental results with different quality factors have shown that the proposed approach performs much better than the state-of-the-art approaches.

Practical implications

To verify the integrity and authenticity of Web images, research on double JPEG compression detection is receiving increasing attention.

Originality/value

This paper proposes a unified framework that detects double JPEG compression whether or not the quantization matrices are the same, which means the approach can be applied to more practical Web forensics tasks.

Details

International Journal of Web Information Systems, vol. 17 no. 2
Type: Research Article
ISSN: 1744-0084

Keywords

Article

Gutembert Nganpet Nzeugaing and Elmarie Biermann

This paper presents research on the design, implementation and testing of an image compression system for a 3U CubeSat.

Abstract

Purpose

This paper presents research on the design, implementation and testing of an image compression system for a 3U CubeSat.

Design/methodology/approach

This paper presents an intensive study of image compression techniques, together with a proposed design and an approach to appropriate hardware for on-board image compression on CubeSats.

Findings

The paper presents a method for improving the image compression ratio while keeping the image quality unchanged. It also discusses appropriate hardware (described as the world's smallest supercomputer) for on-board image compression on CubeSats.

Originality/value

The study provides insight into image compression algorithms.

Details

Journal of Engineering, Design and Technology, vol. 14 no. 3
Type: Research Article
ISSN: 1726-0531

Keywords

Article

L‐K. Shark, X.Y. Lin, M.R. Varley, B.J. Matuszewski and J.P. Smith

This paper presents an efficient lossless compression method to reduce the storage requirement and transmission time for radiographic non‐destructive testing images of…

Abstract

This paper presents an efficient lossless compression method to reduce the storage requirement and transmission time for radiographic non‐destructive testing images of aircraft components. The method is based on a combination of predictive coding and the integer wavelet transform. By using the component CAD model to divide the radiographic image of aircraft components into different regions with each region having the same material structure, the parameters of the predictors and the choice of the integer wavelet transform are optimised according to the specific image features contained in each region. Using a real radiographic image of a practical aircraft component as an example, the proposed method is presented and shown to offer a significantly higher compression ratio than other lossless compression schemes currently available.
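
The integer wavelet part of such a scheme can be illustrated with the reversible LeGall 5/3 lifting transform sketched below in Python; it maps integers to integers and is exactly invertible, which is what makes lossless coding possible. The periodic boundary handling and even-length signal are simplifying assumptions, and the paper's CAD-guided region segmentation and predictor optimisation are not shown.

import numpy as np

def int53_forward(x):
    """One level of the reversible LeGall 5/3 integer wavelet transform via lifting
    (even-length 1-D signal assumed, periodic boundary handling for brevity)."""
    x = np.asarray(x, dtype=np.int64)
    even, odd = x[0::2].copy(), x[1::2].copy()
    odd -= (even + np.roll(even, -1)) >> 1          # predict: detail coefficients
    even += (np.roll(odd, 1) + odd + 2) >> 2        # update: approximation coefficients
    return even, odd

def int53_inverse(even, odd):
    """Exact inverse of the lifting steps above, recovering the original samples."""
    even = even - ((np.roll(odd, 1) + odd + 2) >> 2)
    odd = odd + ((even + np.roll(even, -1)) >> 1)
    x = np.empty(even.size + odd.size, dtype=np.int64)
    x[0::2], x[1::2] = even, odd
    return x

# Usage: the transform is integer-to-integer and perfectly invertible.
signal = np.array([10, 12, 15, 14, 13, 11, 9, 8])
lo, hi = int53_forward(signal)
assert np.array_equal(int53_inverse(lo, hi), signal)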

Details

Aircraft Engineering and Aerospace Technology, vol. 75 no. 4
Type: Research Article
ISSN: 0002-2667

Keywords

Article

Chuanfeng Lv and Qiangfu Zhao

In recent years, principal component analysis (PCA) has attracted great attention in dimension reduction. However, since a very large transformation matrix must be used…

Abstract

Purpose

In recent years, principal component analysis (PCA) has attracted great attention in dimension reduction. However, since a very large transformation matrix must be used for reconstructing the original data, PCA has not been successfully applied to image compression. To solve this problem, this paper aims to propose a new technique called k‐PCA.

Design/methodology/approach

Actually, k‐PCA is a combination of vector quantization (VQ) and PCA. The basic idea is to divide the problem space into k clusters using VQ, and then find a PCA encoder for each cluster. The point is that if the k‐PCA encoder is obtained using data containing enough information, it can be used as a semi‐universal encoder to compress all images in a given domain.
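
A minimal sketch of this idea in Python, assuming k-means as the VQ step and scikit-learn's PCA for each cluster, is given below; each block is then stored as a cluster index plus a short coefficient vector. The cluster count, block size and number of components are illustrative, and the paper's extended LBG learning step is not shown.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

def train_k_pca(blocks, k=4, n_components=8, seed=0):
    """Sketch of a k-PCA encoder: vector-quantise training blocks into k clusters
    (k-means stands in for the VQ step), then fit one PCA encoder per cluster."""
    vq = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(blocks)
    encoders = {c: PCA(n_components=n_components).fit(blocks[vq.labels_ == c])
                for c in range(k)}
    return vq, encoders

def encode_block(block, vq, encoders):
    """Store only the cluster index and the low-dimensional PCA coefficients."""
    c = int(vq.predict(block.reshape(1, -1))[0])
    return c, encoders[c].transform(block.reshape(1, -1))[0]

def decode_block(c, coeffs, encoders):
    """Reconstruct the block from its cluster's PCA basis."""
    return encoders[c].inverse_transform(coeffs.reshape(1, -1))[0]

# Usage: train on flattened 8x8 blocks, then round-trip one block.
rng = np.random.default_rng(0)
blocks = rng.normal(size=(500, 64))
vq, encoders = train_k_pca(blocks)
c, coeffs = encode_block(blocks[0], vq, encoders)
approx = decode_block(c, coeffs, encoders)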

Findings

Although a k‐PCA encoder is more complex than a single PCA encoder, the compression ratio can be much higher because the transformation matrices can be excluded from the encoded data. The performance of the k‐PCA encoder can be improved further through learning. For this purpose, this paper proposes an extended LBG algorithm.

Originality/value

The effectiveness of the k‐PCA is demonstrated through experiments with several well‐known test images.

Details

International Journal of Pervasive Computing and Communications, vol. 3 no. 2
Type: Research Article
ISSN: 1742-7371

Keywords

Article

Yanling Wang

The purpose of this paper is to present an imperceptible and robust watermarking algorithm with high embedding capacity for digital images based on discrete wavelet…

Abstract

Purpose

The purpose of this paper is to present an imperceptible and robust watermarking algorithm with high embedding capacity for digital images based on discrete wavelet transform (DWT) domain.

Design/methodology/approach

First, the watermark image is scrambled using a chaotic sequence and mapped to avoid a block effect after the watermark is embedded into the host image. Then, the scrambled watermark is inserted into the LH2 and HL2 sub‐bands of the DWT of the host image to provide a good tradeoff between the transparency and the robustness of the watermark.
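
A minimal Python sketch of this embedding step, using PyWavelets and a seeded pseudo-random permutation in place of the paper's chaotic scrambling, is shown below. The wavelet, the scaling factor alpha and the assumption that the watermark matches the size of the second-level sub-bands are all illustrative choices.

import numpy as np
import pywt

def embed_watermark(host, watermark_bits, alpha=0.05, seed=0):
    """Scramble a binary watermark and add it, scaled by alpha, to the two
    second-level detail sub-bands (LH2/HL2-like) of a 2-level DWT of the host."""
    cA2, (cH2, cV2, cD2), level1 = pywt.wavedec2(host.astype(float), 'haar', level=2)
    perm = np.random.default_rng(seed).permutation(watermark_bits.size)
    scrambled = watermark_bits.ravel()[perm].reshape(cH2.shape)   # sizes must match
    cH2m = cH2 + alpha * scrambled                                # LH2-like sub-band
    cV2m = cV2 + alpha * scrambled                                # HL2-like sub-band
    return pywt.waverec2([cA2, (cH2m, cV2m, cD2), level1], 'haar')

# Usage: 64x64 host image, 16x16 watermark matching the second-level sub-band size.
host = np.random.default_rng(1).integers(0, 256, size=(64, 64)).astype(float)
wm = np.random.default_rng(2).integers(0, 2, size=(16, 16)).astype(float)
marked = embed_watermark(host, wm)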

Findings

This paper presents experimental results and compares them with other methods. The comparison shows that this method obtains better performance in many cases.

Originality/value

One of the main differences of this technique, compared to other wavelet watermarking techniques, is in the selection of the wavelet coefficients of the host image. When performing the second level of the DWT, most methods in the current literature select the approximation sub‐band (LL2) to insert the watermark. The technique presented in this paper decomposes the image using the DWT twice and then uses the significant coefficients (the LH2 and HL2 sub‐bands) of the host image to insert the watermark.

Article

Khosrow Maleknejad, Saeed Sohrabi and Yasser Rostami

The purpose of this paper is to obtain a high compression coefficient by compressing different portions of an image at different qualities.

Abstract

Purpose

The purpose of this paper is to obtain a high compression coefficient by compressing different portions of an image at different qualities.

Design/methodology/approach

Usually, not all parts of a medical image have equal significance, and the background of an image can be mixed with noise. The method separates the moving part of a video from the stationary part.

Findings

This process results in the high‐quality compression of medical frames.

Originality/value

Separating parts of a frame using 2D and 3D wavelet transforms makes a valuable contribution to biocybernetics.

Details

Kybernetes, vol. 37 no. 2
Type: Research Article
ISSN: 0368-492X

Keywords

Article

Huihuang Zhao, Jianzhen Chen, Shibiao Xu, Ying Wang and Zhijun Qiao

The purpose of this paper is to develop a compressive sensing (CS) algorithm for noisy solder joint imagery compression and recovery. A fast gradient-based compressive…

Abstract

Purpose

The purpose of this paper is to develop a compressive sensing (CS) algorithm for noisy solder joint imagery compression and recovery. A fast gradient-based compressive sensing (FGbCS) approach is proposed based on convex optimization. The proposed algorithm improves performance in terms of peak signal-to-noise ratio (PSNR) and computational cost.

Design/methodology/approach

Unlike traditional CS methods, the authors first transform a noisy solder joint image into a sparse signal using a discrete cosine transform (DCT), so that reconstruction of the noisy solder joint imagery becomes a convex optimization problem. Then, a gradient-based method is used to solve the problem. To improve efficiency, the authors assume the problem is convex with a Lipschitz-continuous gradient and replace an iteration parameter with the Lipschitz constant. Moreover, the FGbCS algorithm is proposed to recover the noisy solder joint imagery under different parameters.
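
The description above closely matches a standard fast gradient (FISTA-type) scheme, so the Python sketch below shows that generic scheme rather than the authors' exact FGbCS algorithm: an l1-regularised least-squares recovery whose step size is set from the Lipschitz constant of the gradient. The regularisation weight, iteration count and measurement model are assumptions, and the DCT sparsifying transform is simply folded into the measurement matrix A here.

import numpy as np

def soft_threshold(x, t):
    """Proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def fista_l1(A, y, lam=0.1, n_iter=200):
    """FISTA-type fast gradient iteration for min 0.5*||A x - y||^2 + lam*||x||_1,
    with the step size taken from the Lipschitz constant of the gradient."""
    L = np.linalg.norm(A, 2) ** 2                   # Lipschitz constant: largest eig of A^T A
    x = np.zeros(A.shape[1])
    z, t = x.copy(), 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ z - y)                    # gradient of the data-fit term at z
        x_new = soft_threshold(z - grad / L, lam / L)
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        z = x_new + ((t - 1.0) / t_new) * (x_new - x)   # momentum (acceleration) step
        x, t = x_new, t_new
    return x

# Usage: recover a sparse coefficient vector from noisy random measurements.
rng = np.random.default_rng(0)
n, m, k = 128, 64, 5
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
A = rng.normal(size=(m, n)) / np.sqrt(m)
y = A @ x_true + 0.01 * rng.normal(size=m)
x_hat = fista_l1(A, y, lam=0.05)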

Findings

Experiments reveal that the proposed algorithm achieves better PSNR results at lower computational cost than classical algorithms such as Orthogonal Matching Pursuit (OMP), Greedy Basis Pursuit (GBP), Subspace Pursuit (SP), Compressive Sampling Matching Pursuit (CoSaMP) and Iteratively Reweighted Least Squares (IRLS). The proposed algorithm converges at the faster rate O(1/k²) rather than O(1/k).

Practical implications

This paper provides a novel methodology for the CS of noisy solder joint imagery, and the proposed algorithm can also be used for other image compression and recovery tasks.

Originality/value

According to CS theory, a sparse or compressible signal can be recovered from far fewer measurements than the Nyquist theorem requires. The new development might provide some fundamental guidelines for noisy image compression and recovery.

Book part

Li Xiao, Hye-jin Kim and Min Ding

Purpose – The advancement of multimedia technology has spurred the use of multimedia in business practice. The adoption of audio and visual data will accelerate as…

Abstract

Purpose

The advancement of multimedia technology has spurred the use of multimedia in business practice. The adoption of audio and visual data will accelerate as marketing scholars become more aware of the value of audio and visual data and of the technologies required to reveal insights into marketing problems. This chapter aims to introduce marketing scholars to this field of research.

Design/methodology/approach

This chapter reviews current technology in audio and visual data analysis and discusses rewarding research opportunities in marketing using these data.

Findings

Compared with traditional data such as survey and scanner data, audio and visual data provide richer information and are easier to collect. Given this superiority, together with data availability, feasibility of storage and increasing computational power, we believe that these data will contribute to better marketing practice with the help of marketing scholars in the near future.

Practical implications

The adoption of audio and visual data in marketing practice will help practitioners gain better insights into marketing problems and thus make better decisions.

Value/originality

This chapter makes a first attempt in the marketing literature to review current technology in audio and visual data analysis and proposes promising applications of such technology. We hope it will inspire scholars to utilize audio and visual data in marketing research.

Details

Review of Marketing Research
Type: Book
ISBN: 978-1-78190-761-0

Keywords
