Search results

1 – 10 of 235
Article
Publication date: 27 May 2014

Huihuang Zhao, Yaonan Wang, Zhijun Qiao and Bin Fu

Abstract

Purpose

The purpose of this paper is to develop an improved compressive sensing algorithm for solder joint imagery compression and recovery. The improved algorithm raises the recovery performance of solder joint imagery in terms of peak signal-to-noise ratio (PSNR).

Design/methodology/approach

Unlike the traditional method, the image was first transformed into a sparse signal by the discrete cosine transform; the solder joint image was then divided into blocks, and each image block was reshaped into a one-dimensional data vector. Finally, a block compressive sampling matching pursuit algorithm was proposed and applied, with different block sizes, to recovering the solder joint imagery.
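
The block pipeline described above can be sketched in a few lines. This is a minimal Python/NumPy illustration under assumed parameters (8 × 8 blocks, a Gaussian measurement matrix), not the authors' implementation:

```python
import numpy as np
from scipy.fft import dctn

def blockwise_dct_vectors(image, block=8):
    """Split an image into blocks, DCT each block and flatten it to a 1-D vector."""
    h, w = image.shape
    vecs = []
    for i in range(0, h - h % block, block):
        for j in range(0, w - w % block, block):
            patch = image[i:i + block, j:j + block]
            vecs.append(dctn(patch, norm='ortho').ravel())  # sparse DCT coefficients
    return np.stack(vecs)

rng = np.random.default_rng(0)
img = rng.random((32, 32))
X = blockwise_dct_vectors(img, block=8)            # 16 blocks x 64 coefficients each
Phi = rng.standard_normal((32, 64)) / np.sqrt(32)  # compressive measurement matrix
Y = X @ Phi.T                                      # 32 measurements per 64-sample block
```

Recovery would then run a matching-pursuit solver per row of Y; a larger block size trades more coefficients per vector against fewer blocks, which is the trade-off the Findings section evaluates.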

Findings

The experiments showed that the proposed algorithm achieved the best PSNR compared with other methods such as the orthogonal matching pursuit, greedy basis pursuit, subspace pursuit and compressive sampling matching pursuit algorithms. A block size of 16 × 16 gave better results than block sizes of 8 × 8 or 4 × 4.

Practical implications

The paper provides a methodology for solder joint imagery compressing and recovery, and the proposed algorithm can also be used in other image compressing and recovery applications.

Originality/value

According to compressed sensing (CS) theory, a sparse or compressible signal can be represented with far fewer basis coefficients than the Nyquist theorem requires. The findings provide fundamental guidelines for improving performance in image compression and recovery based on compressive sensing.
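
The CS principle stated here can be demonstrated with a short sketch (illustrative, not from the paper): orthogonal matching pursuit, one of the baselines compared in the Findings, recovers a k-sparse signal from far fewer random measurements than its length:

```python
import numpy as np

def omp(Phi, y, k):
    """Orthogonal matching pursuit: recover a k-sparse x from y = Phi @ x."""
    residual, support = y.copy(), []
    for _ in range(k):
        # Pick the column most correlated with the current residual.
        support.append(int(np.argmax(np.abs(Phi.T @ residual))))
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(1)
n, m, k = 128, 64, 3                       # 64 measurements of a length-128 signal
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
x_hat = omp(Phi, Phi @ x, k)               # exact recovery in the noiseless case
```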

Details

Soldering & Surface Mount Technology, vol. 26 no. 3
Type: Research Article
ISSN: 0954-0911

Keywords

Article
Publication date: 4 April 2016

Huihuang Zhao, Jianzhen Chen, Shibiao Xu, Ying Wang and Zhijun Qiao

Abstract

Purpose

The purpose of this paper is to develop a compressive sensing (CS) algorithm for noisy solder joint imagery compression and recovery. A fast gradient-based compressive sensing (FGbCS) approach is proposed based on convex optimization. The proposed algorithm improves performance in terms of both peak signal-to-noise ratio (PSNR) and computational cost.

Design/methodology/approach

Unlike traditional CS methods, the authors first transform a noisy solder joint image into a sparse signal by a discrete cosine transform (DCT), so that the reconstruction of noisy solder joint imagery becomes a convex optimization problem. A gradient-based method is then used to solve it. To improve efficiency, the authors treat the objective as convex with a Lipschitz-continuous gradient and replace the iteration step-size parameter with the Lipschitz constant. The proposed FGbCS algorithm is then applied to recover the noisy solder joint imagery under different parameters.
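
The scheme described resembles an accelerated (FISTA-style) proximal gradient iteration. The generic sketch below assumes an L1-regularized least-squares objective and a fixed step size 1/L, with L the Lipschitz constant of the gradient; it illustrates the idea but is not the authors' exact FGbCS code:

```python
import numpy as np

def fista_l1(Phi, y, lam, iters=500):
    """Accelerated gradient descent for 0.5*||Phi x - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(Phi, 2) ** 2          # Lipschitz constant of the gradient
    x = z = np.zeros(Phi.shape[1])
    t = 1.0
    for _ in range(iters):
        grad = Phi.T @ (Phi @ z - y)
        w = z - grad / L                     # gradient step with fixed size 1/L
        x_new = np.sign(w) * np.maximum(np.abs(w) - lam / L, 0.0)  # soft threshold
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        z = x_new + (t - 1) / t_new * (x_new - x)   # momentum -> O(1/k^2) rate
        x, t = x_new, t_new
    return x

rng = np.random.default_rng(2)
Phi = rng.standard_normal((50, 100)) / np.sqrt(50)
x_true = np.zeros(100)
x_true[[5, 40, 77]] = [1.5, -2.0, 1.0]
x_hat = fista_l1(Phi, Phi @ x_true, lam=0.01)
```

The momentum step on z is what lifts the convergence rate from O(1/k) for plain gradient descent to O(1/k²), the speed-up claimed in the Findings.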

Findings

Experiments reveal that the proposed algorithm achieves better PSNR results at lower computational cost than classical algorithms such as orthogonal matching pursuit (OMP), greedy basis pursuit (GBP), subspace pursuit (SP), compressive sampling matching pursuit (CoSaMP) and iterative re-weighted least squares (IRLS). The proposed algorithm converges at the faster rate O(1/k²) rather than O(1/k).

Practical implications

This paper provides a novel methodology for the CS of noisy solder joint imagery, and the proposed algorithm can also be used in other imagery compression and recovery.

Originality/value

According to CS theory, a sparse or compressible signal can be represented with far fewer basis coefficients than the Nyquist theorem requires. The new development may provide fundamental guidelines for noisy imagery compression and recovery.

Article
Publication date: 23 August 2019

Shenlong Wang, Kaixin Han and Jiafeng Jin

Abstract

Purpose

In the past few decades, content-based image retrieval (CBIR), which focuses on the exploration of image feature extraction methods, has been widely investigated. The term feature extraction is used in two senses: application-based feature expression and mathematical approaches for dimensionality reduction. Feature expression is a technique for describing image color, texture and shape information with feature descriptors; obtaining an effective image feature expression is therefore the key to extracting high-level semantic information. However, most previous studies of image feature extraction and expression methods in CBIR have not been systematic. This paper aims to introduce the basic low-level image feature expression techniques for color, texture and shape features that have been developed in recent years.

Design/methodology/approach

First, this review outlines the development process and expounds the principle of various image feature extraction methods, such as color, texture and shape feature expression. Second, some of the most commonly used image low-level expression algorithms are implemented, and the benefits and drawbacks are summarized. Third, the effectiveness of the global and local features in image retrieval, including some classical models and their illustrations provided by part of our experiment, are analyzed. Fourth, the sparse representation and similarity measurement methods are introduced, and the retrieval performance of statistical methods is evaluated and compared.
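
As a concrete instance of the color-feature expression surveyed above (an illustrative sketch, not code from the review), a concatenated per-channel color histogram with histogram-intersection similarity, one of the classical CBIR feature/similarity pairings:

```python
import numpy as np

def color_histogram(image, bins=8):
    """Per-channel color histogram, concatenated and L1-normalized."""
    feats = []
    for c in range(image.shape[2]):
        h, _ = np.histogram(image[:, :, c], bins=bins, range=(0, 256))
        feats.append(h)
    f = np.concatenate(feats).astype(float)
    return f / f.sum()

def hist_intersection(f1, f2):
    """Histogram intersection: a common similarity measure in CBIR (1.0 = identical)."""
    return np.minimum(f1, f2).sum()

rng = np.random.default_rng(3)
img = rng.integers(0, 256, size=(64, 64, 3))   # stand-in for an RGB image
f = color_histogram(img)
```

A global descriptor like this is compact but discards spatial layout, which is one reason the survey contrasts global features with local ones.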

Findings

The core of this survey is to review the state of low-level image expression methods and to study the pros and cons of each method, the occasions to which it applies and certain implementation measures. The review notes that single-feature descriptions of image peculiarities may yield unsatisfactory retrieval capability, and that such descriptions face considerable limitations and challenges in CBIR.

Originality/value

A comprehensive review of the latest developments in image retrieval using low-level feature expression techniques is provided in this paper. This review not only introduces the major approaches for image low-level feature expression but also supplies a pertinent reference for those engaging in research regarding image feature extraction.

Article
Publication date: 8 January 2021

Ashok Naganath Shinde, Sanjay L. Nalbalwar and Anil B. Nandgaonkar

Abstract

Purpose

In today’s digital world, real-time health monitoring is becoming one of the most important challenges in medical research. Body signals such as the electrocardiogram (ECG), electromyogram and electroencephalogram (EEG) are produced in the human body. Continuous monitoring generates a huge amount of data, so an efficient method is required to shrink the size of the acquired data. Compressed sensing (CS) is one technique used to compress data; it is most useful where the data volume is huge or the acquisition process is too expensive to gather samples at the Nyquist rate. This paper aims to propose the Lion Mutated Crow Search Algorithm (LM-CSA) to improve the performance of CS-based signal compression.

Design/methodology/approach

A new CS algorithm is presented in this paper, in which the compression process has three stages: design of a stable measurement matrix, signal compression and signal reconstruction. The compression stage follows the working principle of signal transformation, computation of Θ and normalization. As the main contribution, the Θ evaluation uses a new “enhanced bi-orthogonal wavelet filter.” The enhancement lies in the scaling coefficients, which are optimally tuned for the compression process. Tuning these coefficients is, however, the major difficulty, and hence this work adopts a meta-heuristic strategy. A new hybrid algorithm is introduced to solve the resulting optimization problem: the “Lion Mutated Crow Search Algorithm (LM-CSA),” a hybridization of the crow search algorithm (CSA) and the lion algorithm (LA).
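
The three compression steps named above (signal transformation, computation of Θ, normalization) can be sketched generically. The DCT below stands in for the enhanced bi-orthogonal wavelet filter, which, like the LM-CSA tuning, is specific to the paper:

```python
import numpy as np
from scipy.fft import dct, idct

rng = np.random.default_rng(4)
n, m = 256, 96
u = np.arange(n) / n
x = np.sin(2 * np.pi * 5 * u) + 0.5 * np.sin(2 * np.pi * 12 * u)  # toy biosignal

Psi = idct(np.eye(n), norm='ortho', axis=0)     # columns: DCT basis vectors, x = Psi @ s
s = dct(x, norm='ortho')                        # signal transformation -> sparse s
Phi = rng.standard_normal((m, n)) / np.sqrt(m)  # stable measurement matrix
Theta = Phi @ Psi                               # computation of Theta = Phi * Psi
# Normalization step: column-normalize Theta (with a matching rescale of s)
# before handing it to the sparse reconstruction solver.
y = Phi @ x                                     # compressed measurements, y = Theta @ s
```

Reconstruction then solves for the sparse s from y and Theta; the wavelet choice and the meta-heuristically tuned scaling coefficients only change Psi.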

Findings

Finally, the proposed LM-CSA model is compared with traditional models in terms of error measures such as mean error percentage (MEP), symmetric mean absolute percentage error (SMAPE), mean absolute scaled error, mean absolute error (MAE), root mean square error, L1-norm, L2-norm and infinity-norm. For ECG analysis under bior 3.1, LM-CSA is 56.6, 62.5 and 81.5% better than the bi-orthogonal wavelet in terms of MEP, SMAPE and MAE, respectively. Under bior 3.7 for ECG analysis, in terms of L1-norm, LM-CSA is 0.15% better than the genetic algorithm (GA), 0.10% better than particle swarm optimization (PSO), 0.22% better than firefly (FF), 0.22% better than CSA and 0.14% better than LA. Further, for EEG analysis, LM-CSA is 86.9 and 91.2% better than the traditional bi-orthogonal wavelet under bior 3.1. Under bior 3.3, LM-CSA is 91.7 and 73.12% better than the bi-orthogonal wavelet in terms of MAE and MEP, respectively. Under bior 3.5 for EEG, the L1-norm of LM-CSA is 0.64% better than GA, 0.43% better than PSO, 0.62% better than FF, 0.84% better than CSA and 0.60% better than LA.

Originality/value

This paper presents a novel CS framework using LM-CSA algorithm for EEG and ECG signal compression. To the best of the authors’ knowledge, this is the first work to use LM-CSA with enhanced bi-orthogonal wavelet filter for enhancing the CS capability as well reducing the errors.

Details

International Journal of Pervasive Computing and Communications, vol. 18 no. 5
Type: Research Article
ISSN: 1742-7371

Keywords

Article
Publication date: 9 August 2021

Hrishikesh B Vanjari and Mahesh T Kolte

Abstract

Purpose

Speech is the primary means of communication for humans, and a properly functioning auditory system is needed for accurate cognition of speech. Compressed sensing (CS) is a method for simultaneous compression and sampling of a given signal, and it is increasingly used in many speech processing applications. This paper aims to apply a compressive sensing algorithm in hearing aids to reduce surrounding noise.

Design/methodology/approach

In this work, the authors propose a machine learning algorithm for improving the performance of compressive sensing using a neural network.
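
The abstract gives no architecture details. As a minimal sketch of the underlying idea of learning a fast reconstruction map from measurements back to signals, the linear least-squares decoder below stands in for the authors' neural network (all parameters are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(5)
n, m, k, n_train = 64, 24, 3, 2000
Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # fixed CS measurement matrix

# Training pairs: random k-sparse signals and their compressed measurements.
X = np.zeros((n_train, n))
for row in X:
    row[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
Y = X @ Phi.T

# Fit a linear decoder W minimizing ||Y @ W - X||^2 -- a stand-in for the
# network: reconstruction then costs a single matrix multiply instead of an
# iterative L2/L1 solve, which is where the reported time saving comes from.
W, *_ = np.linalg.lstsq(Y, X, rcond=None)

x_test = np.zeros(n)
x_test[[2, 30, 50]] = [1.0, -1.0, 0.5]
x_rec = (Phi @ x_test) @ W                       # one-shot reconstruction
```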

Findings

The proposed solution reduces the signal reconstruction time by about 21.62% and the root mean square error by about 43% compared with the default L2-norm minimization used in CS reconstruction. This work proposes an adaptive neural network-based algorithm that enhances compressive sensing so that the signal can be reconstructed in comparatively less time and with minimal distortion of quality.

Research limitations/implications

The use of compressive sensing for speech enhancement in a hearing aid is limited due to the delay in the reconstruction of the signal.

Practical implications

In many digital applications, acquired raw signals are compressed to a smaller size to make storage and transmission effective. In this process, even unnecessary signals are acquired and compressed, leading to inefficiency.

Social implications

Hearing loss is the most common sensory deficit in humans today. Worldwide, it is the second leading cause of “years lived with disability,” the first being depression. A recent World Health Organization study estimates that nearly 450 million people in the world are disabled by hearing loss, and the prevalence of hearing impairment in India is around 6.3% (63 million people suffering from significant auditory loss).

Originality/value

The objective is to reduce the time taken for CS reconstruction with minimal degradation to the reconstructed signal. Also, the solution must be adaptive to different characteristics of the signal and in presence of different types of noises.

Details

World Journal of Engineering, vol. 19 no. 2
Type: Research Article
ISSN: 1708-5284

Keywords

Article
Publication date: 29 January 2021

Junying Chen, Zhanshe Guo, Fuqiang Zhou, Jiangwen Wan and Donghao Wang

Abstract

Purpose

Because the energy of wireless sensor networks (WSNs) is limited, energy-efficient data-gathering algorithms are required. This paper proposes a compressive data-gathering algorithm based on double sparse structure dictionary learning (DSSDL). The purpose of this paper is to reduce the energy consumption of WSNs.

Design/methodology/approach

Historical data are used to construct a sparse representation base. In the dictionary-learning stage, the sparse representation matrix is decomposed into the product of two sparse matrices. In the dictionary update stage, the sparse representation matrix is orthogonalized and normalized. The resulting double sparse structure dictionary is applied to compressive data gathering in WSNs.
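
The double sparse structure and the orthogonalize-and-normalize update can be sketched as follows. This is an undercomplete toy example so that full orthogonalization is possible; the paper's learning loop and historical sensor data are not reproduced:

```python
import numpy as np

rng = np.random.default_rng(6)
n, n_atoms, nnz = 32, 24, 4

# Double sparse structure: dictionary D = B @ A, a fixed base dictionary B
# times a column-sparse matrix A (each atom mixes only a few base columns).
B = rng.standard_normal((n, n))          # base dictionary (e.g. a DCT basis in practice)
A = np.zeros((n, n_atoms))
for j in range(n_atoms):
    A[rng.choice(n, nnz, replace=False), j] = rng.standard_normal(nnz)
D = B @ A

# Update stage: orthogonalize and normalize the atoms (QR preserves their span
# while making the columns orthonormal).
D_orth, _ = np.linalg.qr(D)
```

The sparse factor A is what keeps storage and per-node computation cheap, which matters on energy-constrained WSN hardware.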

Findings

The dictionary obtained by the proposed algorithm has better sparse representation ability. The experimental results show that the sparse representation error can be reduced by at least 3.6% compared with other dictionaries. In addition, the better sparse representation ability allows the WSN to take fewer measurements for the same data-gathering accuracy, which means more energy saving. According to the simulation results, the proposed algorithm can reduce energy consumption by at least 2.7% compared with other compressive data-gathering methods at the same data-gathering accuracy.

Originality/value

In this paper, the double sparse structure dictionary is introduced into the compressive data-gathering algorithm in WSNs. The experimental results indicate that the proposed algorithm has good performance on energy consumption and sparse representation.

Details

Sensor Review, vol. 41 no. 1
Type: Research Article
ISSN: 0260-2288

Keywords

Open Access
Article
Publication date: 22 June 2023

Ignacio Manuel Luque Raya and Pablo Luque Raya

Abstract

Purpose

Having defined liquidity, the aim is to assess the predictive capacity of its representative variables, so that economic fluctuations may be better understood.

Design/methodology/approach

Conceptual variables that are representative of liquidity will be used to formulate the predictions. The results of various machine learning models will be compared, leading to some reflections on the predictive value of the liquidity variables, with a view to defining their selection.

Findings

The predictive capacity of the model was also found to vary with the source of the liquidity, insofar as data on private-sector liquidity contributed more to the prediction of economic fluctuations than data on public-sector liquidity. International liquidity was seen as a more diffuse concept, and the standardization of its definition could be the focus of future studies. A benchmarking process was also performed when applying the state-of-the-art machine learning models.

Originality/value

Better understanding of these variables might help us toward a deeper understanding of the operation of financial markets. Liquidity, one of the key financial market variables, is neither well-defined nor standardized in the existing literature, which calls for further study. Hence, the novelty of an applied study employing modern data science techniques can provide a fresh perspective on financial markets.

Liquidity, whether in financial markets or in the real economy, is one of the clearest predictors of market trends.

It is therefore an extremely important concept for understanding economic cycles and development. This study seeks to advance price prediction for safe assets, which reflect the real state of the economy, in particular the US ten-year Treasury bond.

Details

European Journal of Management and Business Economics, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2444-8451

Keywords

Article
Publication date: 21 April 2020

Bo Li, Jian ming Wang, Qi Wang, Xiu yan Li and Xiaojie Duan

Abstract

Purpose

The purpose of this paper is to monitor gas/liquid two-phase flow, which exists widely in industrial fields, especially chemical engineering. Electrical resistance tomography (ERT) is considered one of the most promising techniques for monitoring the transient flow process because of its advantages, such as fast response speed and cross-sectional imaging. However, maintaining high spatial resolution together with low cost is still challenging for two-phase flow imaging because of the ill-conditioning of the ERT inverse problem.

Design/methodology/approach

In this paper, a sparse reconstruction (SR) method based on a learned dictionary is proposed for ERT, to accurately monitor the transient flow process of gas/liquid two-phase flow in a pipeline. A high-level representation of the conductivity distributions for typical flow regimes is extracted by a denoising deep extreme learning machine (DDELM) model and used as prior information for dictionary learning.

Findings

The results from simulation and dynamic experiments indicate that the proposed algorithm improves the quality of reconstructed images compared with typical algorithms such as Landweber and SR with a discrete Fourier transform or discrete cosine transform basis. Furthermore, SR-DDELM has also been used to estimate important parameters of the chemical process, such as the volume flow rate. SR-DDELM is therefore considered an ideal candidate for online monitoring of gas/liquid two-phase flow.

Originality/value

This paper presents a novel approach to effectively monitor gas/liquid two-phase flow in pipelines. A deep learning model and an adaptive dictionary are each trained on the same prior conductivity distributions: the model is used to extract a high-level representation, and the dictionary is used to represent the features of the flow process. SR and high-level representation extraction are performed iteratively. The new method clearly improves monitoring accuracy and saves calculation time.

Details

Sensor Review, vol. 40 no. 4
Type: Research Article
ISSN: 0260-2288

Keywords

Article
Publication date: 7 November 2016

Zhen Ma, Degan Zhang, Si Liu, Jinjie Song and Yuexian Hou

Abstract

Purpose

The performance of the measurement matrix directly affects the reconstruction quality of a compressive sensing signal and is also key to solving practical problems. To address the data collection problem of wireless sensor networks (WSNs), the authors design an optimized sparse matrix. The paper aims to discuss these issues.

Design/methodology/approach

Starting from a sparse random matrix, the seed vector is optimized using the elements of the diagonal matrix obtained from the singular value decomposition (SVD) of a Hadamard matrix. Compared with the Toeplitz matrix, the construction requires fewer independent random variables, and the matrix information is more concentrated.
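
The ingredients named above can be sketched as follows. The exact seeding rule is the paper's contribution and is not fully specified in the abstract, so this is only an illustration; note that a Hadamard matrix H satisfies H·Hᵀ = m·I, so all of its singular values equal √m:

```python
import numpy as np
from scipy.linalg import hadamard

m, N, d = 16, 64, 4
H = hadamard(m).astype(float)
sigma = np.linalg.svd(H, compute_uv=False)   # singular values of the Hadamard matrix
# Every singular value equals sqrt(m); the normalized vector of singular
# values serves here as the seed for the sparse matrix entries.
seed = sigma / np.linalg.norm(sigma)

rng = np.random.default_rng(7)
Phi = np.zeros((m, N))
for j in range(N):                           # d nonzeros per column, random signs
    rows = rng.choice(m, d, replace=False)
    Phi[rows, j] = rng.choice(seed, d) * rng.choice([-1.0, 1.0], d)
```

With only d nonzeros per column, sensing a length-N reading costs d multiply-adds per measurement instead of m, which is where the energy saving in WSN nodes comes from.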

Findings

The reconstruction performance is better than that of a Gaussian random matrix. The authors also apply this matrix to a data collection scheme in WSNs. The result shows that it costs less energy and reduces the collection frequency of nodes compared with the general method.

Originality/value

The authors design an optimized sparse matrix. Starting from a sparse random matrix, the seed vector is optimized using the elements of the diagonal matrix obtained from the SVD of a Hadamard matrix. Compared with the Toeplitz matrix, the construction requires fewer independent random variables, and the matrix information is more concentrated.

Details

Engineering Computations, vol. 33 no. 8
Type: Research Article
ISSN: 0264-4401

Keywords

Article
Publication date: 1 January 2014

Xiaoyan Zhuang, Yijiu Zhao, Li Wang and Houjun Wang

Abstract

Purpose

The purpose of this paper is to present a compressed sensing (CS)-based sampling system for ultra-wide-band (UWB) signals. By exploiting signal sparsity, the new system can sample, at sub-Nyquist rates, a multiband UWB signal whose unknown frequency support occupies only a small portion of a wide spectrum.

Design/methodology/approach

A random Rademacher sequence is used to sense the signal in the frequency domain, and a matrix constructed by Hadamard basis is used to compress the signal. The probability of reconstruction is proved mathematically, and the reconstruction matrix is developed in the frequency domain.
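
The two-stage front end described above can be sketched as follows. This is an illustrative toy, not the authors' system, and the frequency-domain reconstruction step is omitted:

```python
import numpy as np
from scipy.linalg import hadamard

rng = np.random.default_rng(8)
n, m = 64, 16
u = np.arange(n) / n
# A sparse-spectrum stand-in for a multiband UWB signal: two active tones.
x = np.cos(2 * np.pi * 5 * u) + 0.5 * np.sin(2 * np.pi * 12 * u)

rademacher = rng.choice([-1.0, 1.0], n)           # random +/-1 sensing sequence
Phi = hadamard(n)[:m].astype(float) / np.sqrt(n)  # compression via Hadamard rows
y = Phi @ (rademacher * x)                        # m sub-Nyquist measurements of n samples
```

Reconstruction would then solve for the sparse spectrum consistent with y, which is where the high-probability recovery guarantee of the paper applies.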

Findings

Simulation results indicate that, at an ultra-low sampling rate, the proposed system can capture and reconstruct sparse multiband UWB signals with high probability. For sparse multiband UWB signals, the proposed system thus has the potential to sample below the rate prescribed by the Shannon theorem.

Originality/value

Unlike traditional sub-Nyquist techniques, the proposed sampling system not only breaks through the limitation of the Shannon theorem but also avoids the input-bandwidth barrier of analog-to-digital converters (ADCs).

Details

COMPEL: The International Journal for Computation and Mathematics in Electrical and Electronic Engineering, vol. 33 no. 1/2
Type: Research Article
ISSN: 0332-1649

Keywords
