Search results

1 – 10 of 582
Article
Publication date: 19 June 2017

Qi Wang, Pengcheng Zhang, Jianming Wang, Qingliang Chen, Zhijie Lian, Xiuyan Li, Yukuan Sun, Xiaojie Duan, Ziqiang Cui, Benyuan Sun and Huaxiang Wang

Electrical impedance tomography (EIT) is a technique for reconstructing the conductivity distribution by injecting currents at the boundary of a subject and measuring the…

Abstract

Purpose

Electrical impedance tomography (EIT) is a technique for reconstructing the conductivity distribution by injecting currents at the boundary of a subject and measuring the resulting changes in voltage. Image reconstruction for EIT is a nonlinear problem. A generalized inverse operator is usually ill-posed and ill-conditioned. Therefore, the solutions for EIT are not unique and are highly sensitive to measurement noise.

Design/methodology/approach

This paper develops a novel image reconstruction algorithm for EIT based on patch-based sparse representation. The sparsifying dictionary optimization and image reconstruction are performed alternately. Two types of patch-based sparsity, namely square-patch sparsity and column-patch sparsity, are discussed and compared with global sparsity.
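
As a hedged illustration of the kind of patch-based sparse coding the design describes, the sketch below extracts square patches from a stand-in conductivity image, learns a sparsifying dictionary and codes the patches with OMP; the image, patch size, dictionary size and sparsity level are assumptions, not values from the paper.

    # Illustrative patch-based sparse representation sketch (Python); parameters are assumed.
    import numpy as np
    from sklearn.decomposition import MiniBatchDictionaryLearning, sparse_encode
    from sklearn.feature_extraction.image import extract_patches_2d, reconstruct_from_patches_2d

    sigma = np.random.rand(64, 64)                    # stand-in conductivity image
    patches = extract_patches_2d(sigma, (8, 8))       # square patches
    X = patches.reshape(len(patches), -1)

    # Learn a sparsifying dictionary from the patches, then sparse-code each patch.
    dico = MiniBatchDictionaryLearning(n_components=64, alpha=1.0).fit(X)
    codes = sparse_encode(X, dico.components_, algorithm="omp", n_nonzero_coefs=5)

    # Reassemble an image estimate from the coded patches.
    recon = reconstruct_from_patches_2d((codes @ dico.components_).reshape(patches.shape), sigma.shape)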

Findings

Both simulation and experimental results indicate that the patch-based sparsity method can improve the quality of image reconstruction and tolerate a relatively high level of noise in the measured voltages.

Originality/value

The EIT image is reconstructed based on patch-based sparse representation. Square-patch sparsity and column-patch sparsity are proposed and compared. Sparse dictionary optimization and image reconstruction are performed alternately. The new method tolerates a relatively high level of noise in the measured voltages.

Details

Sensor Review, vol. 37 no. 3
Type: Research Article
ISSN: 0260-2288

Keywords

Article
Publication date: 15 April 2020

ZiJian Tian, XiaoWei Gong, FangYuan He, JiaLuan He and XuQi Wang

To solve the problem that the traditional received signal strength indicator real-time location method does not test the attenuation characteristics of the electromagnetic wave…

Abstract

Purpose

The traditional received signal strength indicator (RSSI) real-time location method does not test the attenuation characteristics of electromagnetic wave transmission in the location area, so location accuracy cannot be guaranteed and large location errors result. This paper aims to solve that problem.

Design/methodology/approach

At present, compressed sensing (CS) reconstruction algorithms can be roughly divided into two categories (Zhouzhou and Fubao, 2014; Lagunas et al., 2016). The first is the family of greedy iterative algorithms proposed for combinatorial optimization problems, which includes the matching pursuit (MP) algorithm, the orthogonal matching pursuit (OMP) algorithm, greedy pursuit algorithms, the stagewise orthogonal matching pursuit (StOMP) algorithm and so on. The second is the family of convex optimization algorithms, also called optimization approximation methods; the most common is basis pursuit, which uses the l1 norm instead of the l0 norm to make the optimization problem tractable. In this paper, an improved StOMP reconstruction algorithm is obtained by building on the stagewise orthogonal matching pursuit algorithm.
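
For context, here is a minimal sketch of plain orthogonal matching pursuit (OMP), the greedy baseline that StOMP and the improved StOMP build on; the matrix sizes and sparsity level are illustrative assumptions and this is not the paper's improved algorithm.

    # Minimal OMP sketch (Python): greedily pick atoms, then least-squares refit.
    import numpy as np

    def omp(A, y, k):
        residual, support = y.copy(), []
        for _ in range(k):
            j = int(np.argmax(np.abs(A.T @ residual)))    # atom most correlated with the residual
            support.append(j)
            x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            residual = y - A[:, support] @ x_s            # orthogonalize against the chosen atoms
        x = np.zeros(A.shape[1])
        x[support] = x_s
        return x

    A = np.random.randn(40, 128)
    x_true = np.zeros(128); x_true[[3, 50, 90]] = [1.0, -2.0, 0.5]
    x_hat = omp(A, A @ x_true, k=3)                       # recovers the 3-sparse vector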

Findings

In this paper, the OMP algorithm, the StOMP algorithm and the improved StOMP algorithm are used as reconstruction algorithms in simulation to compare location performance. The estimated position of the target is very close to its original position. It is concluded that the CS grid-based target stepwise location method can accurately locate the target in such a specific region as an underground tunnel.

Originality/value

In this paper, the offline fingerprint database for the offline phase of the location method is established, and the measurement of the electromagnetic noise distribution in different localization areas is considered. Furthermore, the offline phase shares the work of the location process, which greatly reduces the algorithmic complexity of the online location phase and the power consumption of the reference nodes; it is also easy to implement under the same conditions and conforms to the location environment.

Details

Sensor Review, vol. 40 no. 4
Type: Research Article
ISSN: 0260-2288

Keywords

Open Access
Article
Publication date: 3 August 2020

Abdellatif Moudafi

The focus of this paper is in Q-Lasso introduced in Alghamdi et al. (2013) which extended the Lasso by Tibshirani (1996). The closed convex subset Q belonging in a Euclidean m

Abstract

The focus of this paper is the Q-Lasso introduced in Alghamdi et al. (2013), which extended the Lasso of Tibshirani (1996). The closed convex subset Q, lying in a Euclidean m-space for m ∈ ℕ, is the set of errors when linear measurements are taken to recover a signal/image via the Lasso. Based on a recent work by Wang (2013), we are interested in two new penalty methods for Q-Lasso relying on two types of difference-of-convex-functions (DC for short) programming, where the DC objective functions are the difference of the l1 and lσq norms and the difference of the l1 and lr norms with r > 1. By means of a generalized q-term shrinkage operator that exploits the special structure of the lσq norm, we design a proximal gradient algorithm for handling the DC l1 − lσq model. Then, based on a majorization scheme, we develop a majorized penalty algorithm for the DC l1 − lr model. The convergence results of our new algorithms are presented as well. We would like to emphasize that extensive simulation results in the case Q = {b} show that these two new algorithms offer improved signal recovery performance and require reduced computational effort relative to state-of-the-art l1 and lp (p ∈ (0, 1)) models, see Wang (2013). We also devise two DC algorithms in the spirit of a paper in which an exact DC representation of the cardinality constraint is investigated and which also used the largest-q norm lσq, and we present numerical results that show the efficiency of our DC algorithm in comparison with other methods using other penalty terms in the context of quadratic programming, see Jun-ya et al. (2017).
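
For readability, the DC-penalized objective described above can be written in LaTeX (under the usual Q-Lasso formulation, which is an assumption here rather than a quotation from the paper) as

    \min_{x \in \mathbb{R}^n} \ \tfrac{1}{2}\,\mathrm{dist}^{2}(Ax, Q)
      \;+\; \lambda \left( \lVert x \rVert_{1} - \lVert x \rVert_{\sigma_q} \right),
    \qquad
    \lVert x \rVert_{\sigma_q} := \max_{|I| = q} \sum_{i \in I} \lvert x_i \rvert ,

where the largest-q norm sums the q largest magnitudes of x and λ > 0 is a penalty parameter; the l1 − lr variant replaces the lσq term by the lr norm with r > 1.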

Details

Applied Computing and Informatics, vol. 17 no. 1
Type: Research Article
ISSN: 2634-1964

Keywords

Article
Publication date: 21 April 2020

Bo Li, Jianming Wang, Qi Wang, Xiuyan Li and Xiaojie Duan

Gas/liquid two-phase flow widely exists in industrial fields, especially in chemical engineering, and the purpose of this paper is to monitor it accurately. Electrical resistance tomography…

Abstract

Purpose

Gas/liquid two-phase flow widely exists in industrial fields, especially in chemical engineering, and the purpose of this paper is to monitor it accurately. Electrical resistance tomography (ERT) is considered to be one of the most promising techniques to monitor the transient flow process because of its advantages such as fast response speed and cross-section imaging. However, maintaining high resolution in space together with low cost is still challenging for two-phase flow imaging because of the ill-conditioning of the ERT inverse problem.

Design/methodology/approach

In this paper, a sparse reconstruction (SR) method based on a learned dictionary is proposed for ERT, to accurately monitor the transient flow process of gas/liquid two-phase flow in a pipeline. The high-level representation of the conductivity distributions for typical flow regimes is extracted by a denoising deep extreme learning machine (DDELM) model and used as prior information for dictionary learning.
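
As a hedged sketch of the kind of single-layer denoising extreme learning machine that a DDELM stacks, the code below corrupts training data with noise, uses fixed random input weights and solves the output weights by ridge regression; the layer width, noise level and ridge parameter are assumptions, not the paper's architecture.

    # Illustrative denoising ELM layer (Python); not the paper's exact DDELM model.
    import numpy as np

    def denoising_elm_layer(X_clean, n_hidden=256, noise=0.1, ridge=1e-3, seed=0):
        rng = np.random.default_rng(seed)
        X_noisy = X_clean + noise * rng.standard_normal(X_clean.shape)
        W = rng.standard_normal((X_clean.shape[1], n_hidden))     # random, untrained input weights
        b = rng.standard_normal(n_hidden)
        H = np.tanh(X_noisy @ W + b)                              # hidden (high-level) representation
        # Output weights by ridge regression: map noisy hidden features back to clean data.
        beta = np.linalg.solve(H.T @ H + ridge * np.eye(n_hidden), H.T @ X_clean)
        return H, H @ beta                                        # representation and denoised reconstruction

    H, X_rec = denoising_elm_layer(np.random.rand(200, 64))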

Findings

The results from simulation and dynamic experiments indicate that the proposed algorithm efficiently improves the quality of the reconstructed images compared with typical algorithms such as Landweber and SR with the discrete Fourier transform/discrete cosine transform. Furthermore, SR-DDELM has also been used to estimate important parameters of the chemical process, a case in point being the volume flow rate. Therefore, SR-DDELM is considered an ideal candidate for online monitoring of gas/liquid two-phase flow.

Originality/value

This paper presents a novel approach to effectively monitor gas/liquid two-phase flow in pipelines. One deep learning model and one adaptive dictionary are trained on the same prior conductivity distributions: the model is used to extract the high-level representation, and the dictionary is used to represent the features of the flow process. SR and extraction of the high-level representation are performed iteratively. The new method clearly improves monitoring accuracy and saves calculation time.

Details

Sensor Review, vol. 40 no. 4
Type: Research Article
ISSN: 0260-2288

Keywords

Article
Publication date: 4 April 2016

Huihuang Zhao, Jianzhen Chen, Shibiao Xu, Ying Wang and Zhijun Qiao

The purpose of this paper is to develop a compressive sensing (CS) algorithm for noisy solder joint imagery compression and recovery. A fast gradient-based compressive sensing…

Abstract

Purpose

The purpose of this paper is to develop a compressive sensing (CS) algorithm for noisy solder joint imagery compression and recovery. A fast gradient-based compressive sensing (FGbCS) approach is proposed based on convex optimization. The proposed algorithm is able to improve performance in terms of peak signal-to-noise ratio (PSNR) and computational cost.

Design/methodology/approach

Unlike traditional CS methods, the authors first transformed a noisy solder joint image into a sparse signal by a discrete cosine transform (DCT), so that the reconstruction of noisy solder joint imagery was changed to a convex optimization problem. Then, a gradient-based method was utilized to solve the problem. To improve efficiency, the authors assumed the problem to be convex with a Lipschitz-continuous gradient and replaced an iteration parameter with the Lipschitz constant. Moreover, a FGbCS algorithm was proposed to recover the noisy solder joint imagery under different parameters.
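
A generic sketch of the kind of fast gradient step the FGbCS approach builds on is shown below, with the step size fixed by the Lipschitz constant of the gradient; the l1-regularized objective, operator and parameters are assumptions, and this is not the paper's exact algorithm.

    # Generic fast proximal-gradient (FISTA-style) sketch (Python) for
    # min 0.5*||A x - y||^2 + lam*||x||_1, with the step size set by the Lipschitz constant.
    import numpy as np

    def fast_gradient_cs(A, y, lam=0.1, n_iter=100):
        L = np.linalg.norm(A, 2) ** 2                      # Lipschitz constant of the smooth gradient
        x = z = np.zeros(A.shape[1]); t = 1.0
        for _ in range(n_iter):
            grad = A.T @ (A @ z - y)
            w = z - grad / L
            x_new = np.sign(w) * np.maximum(np.abs(w) - lam / L, 0.0)   # soft-thresholding
            t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
            z = x_new + ((t - 1) / t_new) * (x_new - x)    # momentum step behind the O(1/k^2) rate
            x, t = x_new, t_new
        return x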

Findings

Experiments reveal that the proposed algorithm can achieve better results on PSNR with lower computational cost than classical algorithms such as Orthogonal Matching Pursuit (OMP), Greedy Basis Pursuit (GBP), Subspace Pursuit (SP), Compressive Sampling Matching Pursuit (CoSaMP) and Iterative Re-weighted Least Squares (IRLS). The proposed algorithm converges at the faster rate O(1/k²) instead of O(1/k).

Practical implications

This paper provides a novel methodology for the CS of noisy solder joint imagery, and the proposed algorithm can also be used in other imagery compression and recovery.

Originality/value

According to CS theory, a sparse or compressible signal can be represented by far fewer bases than required by the Nyquist theorem. The new development might provide some fundamental guidelines for noisy imagery compression and recovery.

Article
Publication date: 8 January 2021

Ashok Naganath Shinde, Sanjay L. Nalbalwar and Anil B. Nandgaonkar

In today’s digital world, real-time health monitoring is becoming a most important challenge in the field of medical research. Body signals such as electrocardiogram (ECG)…

Abstract

Purpose

In today’s digital world, real-time health monitoring is becoming one of the most important challenges in the field of medical research. Body signals such as the electrocardiogram (ECG), electromyogram and electroencephalogram (EEG) are produced in the human body. Continuous monitoring generates a huge amount of data, and thus an efficient method is required to shrink the size of the data obtained. Compressed sensing (CS) is one of the techniques used to compress the data size. It is mostly used in applications where the amount of data is huge or where the acquisition process is too expensive to gather data from a vast number of samples at the Nyquist rate. This paper aims to propose the Lion Mutated Crow search Algorithm (LM-CSA) to improve the performance of the CS compression model.

Design/methodology/approach

A new CS algorithm is presented in this paper, in which the compression process comprises three stages: design of a stable measurement matrix, signal compression and signal reconstruction. The compression itself follows a fixed working principle: signal transformation, computation of Θ and normalization. As the main contribution, the Θ values are evaluated with a new “Enhanced bi-orthogonal wavelet filter.” The enhancement lies in the scaling coefficients, which are optimally tuned for the compression. However, how to tune them is the key difficulty, and hence this work turns to meta-heuristic algorithms. Moreover, a new hybrid algorithm is introduced that resolves this optimization issue. The proposed algorithm, named the “Lion Mutated Crow search Algorithm (LM-CSA),” is a hybridization of the crow search algorithm (CSA) and the lion algorithm (LA).
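
A hedged sketch of the three generic stages named above (measurement matrix, Θ computation, compression with normalization) is given below; the Gaussian measurement matrix and the stock 'bior3.1' wavelet from PyWavelets stand in for the paper's tuned enhanced bi-orthogonal filter, and all sizes are assumptions.

    # Illustrative CS pipeline sketch (Python): transform, measure, normalize.
    import numpy as np
    import pywt

    rng = np.random.default_rng(0)
    ecg = rng.standard_normal(1024)                           # stand-in ECG segment

    s = np.concatenate(pywt.wavedec(ecg, "bior3.1"))          # signal transformation: wavelet coefficients
    Phi = rng.standard_normal((256, s.size)) / np.sqrt(256)   # stable measurement matrix
    y = Phi @ s                                               # compression: y = Phi @ s, i.e. Theta acting on
                                                              # the signal with Theta = Phi @ Psi (Psi = analysis matrix)
    y /= np.linalg.norm(y)                                    # normalization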

Findings

Finally, the proposed LM-CSA model is compared with traditional models in terms of error measures such as mean error percentage (MEP), symmetric mean absolute percentage error (SMAPE), mean absolute scaled error, mean absolute error (MAE), root mean square error, L1-norm, L2-norm and infinity-norm. For ECG analysis under bior 3.1, LM-CSA is 56.6, 62.5 and 81.5% better than the bi-orthogonal wavelet in terms of MEP, SMAPE and MAE, respectively. Under bior 3.7 for ECG analysis, LM-CSA is 0.15% better than the genetic algorithm (GA), 0.10% better than particle swarm optimization (PSO), 0.22% better than firefly (FF), 0.22% better than CSA and 0.14% better than LA in terms of L1-norm. Further, for EEG analysis, LM-CSA is 86.9 and 91.2% better than the traditional bi-orthogonal wavelet under bior 3.1. Under bior 3.3, LM-CSA is 91.7 and 73.12% better than the bi-orthogonal wavelet in terms of MAE and MEP, respectively. Under bior 3.5 for EEG, the L1-norm of LM-CSA is 0.64% better than GA, 0.43% better than PSO, 0.62% better than FF, 0.84% better than CSA and 0.60% better than LA.

Originality/value

This paper presents a novel CS framework using the LM-CSA algorithm for EEG and ECG signal compression. To the best of the authors’ knowledge, this is the first work to use LM-CSA with an enhanced bi-orthogonal wavelet filter to enhance the CS capability as well as reduce the errors.

Details

International Journal of Pervasive Computing and Communications, vol. 18 no. 5
Type: Research Article
ISSN: 1742-7371

Keywords

Article
Publication date: 1 January 2014

Xiaoyan Zhuang, Yijiu Zhao, Li Wang and Houjun Wang

The purpose of this paper is to present a compressed sensing (CS)-based sampling system for ultra-wide-band (UWB) signal. By exploiting the sparsity of signal, this new sampling…

Abstract

Purpose

The purpose of this paper is to present a compressed sensing (CS)-based sampling system for ultra-wide-band (UWB) signal. By exploiting the sparsity of signal, this new sampling system can sub-Nyquist sample a multiband UWB signal, whose unknown frequency support occupies only a small portion of a wide spectrum.

Design/methodology/approach

A random Rademacher sequence is used to sense the signal in the frequency domain, and a matrix constructed from the Hadamard basis is used to compress the signal. The probability of reconstruction is proved mathematically, and the reconstruction matrix is developed in the frequency domain.
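
A small sketch of the sensing construction described above follows: a random Rademacher (±1) sequence modulates the signal and a sub-sampled Hadamard matrix compresses it; the lengths and the row-selection rule are assumptions, not the paper's exact design.

    # Illustrative Rademacher/Hadamard sensing sketch (Python).
    import numpy as np
    from scipy.linalg import hadamard

    rng = np.random.default_rng(1)
    N, M = 256, 64                                   # Nyquist-rate length and number of measurements
    x = rng.standard_normal(N)                       # stand-in multiband UWB samples

    d = rng.choice([-1.0, 1.0], size=N)              # random Rademacher sequence
    H = hadamard(N) / np.sqrt(N)                     # orthonormal Hadamard basis
    rows = rng.choice(N, size=M, replace=False)      # keep only M rows: sub-Nyquist compression
    y = H[rows] @ (d * x)                            # compressed measurements of the modulated signal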

Findings

Simulation results indicate that, with an ultra-low sampling rate, the proposed system can capture and reconstruct sparse multiband UWB signals with high probability. For sparse multiband UWB signals, the proposed system has potential to break through the Shannon theorem.

Originality/value

Different from traditional sub-Nyquist techniques, the proposed sampling system not only breaks through the limitation of the Shannon theorem but also avoids the barrier of the input bandwidth of analog-to-digital converters (ADCs).

Details

COMPEL: The International Journal for Computation and Mathematics in Electrical and Electronic Engineering, vol. 33 no. 1/2
Type: Research Article
ISSN: 0332-1649

Keywords

Article
Publication date: 27 May 2014

Huihuang Zhao, Yaonan Wang, Zhijun Qiao and Bin Fu

The purpose of this paper is to develop an improved compressive sensing algorithm for solder joint imagery compressing and recovery. The improved algorithm can improve the…

Abstract

Purpose

The purpose of this paper is to develop an improved compressive sensing algorithm for solder joint imagery compression and recovery. The improved algorithm can improve the performance of solder joint imagery recovery in terms of peak signal-to-noise ratio (PSNR).

Design/methodology/approach

Unlike the traditional method, the image was first transformed into a sparse signal by a discrete cosine transform; then the solder joint image was divided into blocks, and each image block was reshaped into a one-dimensional data vector. Finally, a block compressive sampling matching pursuit was proposed, and the proposed algorithm with different block sizes was used to recover the solder joint imagery.
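
A hedged sketch of the pre-processing described above (DCT sparsification, blocking and vectorization) is shown below; the 64 × 64 image and 16 × 16 block size are assumptions, and the block compressive sampling matching pursuit recovery step itself is omitted.

    # Illustrative DCT + blocking sketch (Python) prior to block-wise CS recovery.
    import numpy as np
    from scipy.fftpack import dct

    img = np.random.rand(64, 64)                                          # stand-in solder joint image
    coeffs = dct(dct(img, axis=0, norm="ortho"), axis=1, norm="ortho")    # 2-D DCT -> sparse representation

    B = 16                                                                # block size
    blocks = [coeffs[i:i + B, j:j + B].ravel()                            # each block as a 1-D vector
              for i in range(0, img.shape[0], B)
              for j in range(0, img.shape[1], B)]
    X = np.stack(blocks, axis=1)                                          # columns feed the block-wise recovery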

Findings

The experiments showed that the proposed algorithm could achieve the best results on PSNR when compared to other methods such as the orthogonal matching pursuit algorithm, greedy basis pursuit algorithm, subspace pursuit algorithm and compressive sampling matching pursuit algorithm. When the block size was 16 × 16, the proposed algorithm could obtain better results than when the block size was 8 × 8 or 4 × 4.

Practical implications

The paper provides a methodology for solder joint imagery compressing and recovery, and the proposed algorithm can also be used in other image compressing and recovery applications.

Originality/value

According to compressed sensing (CS) theory, a sparse or compressible signal can be represented by far fewer bases than required by the Nyquist theorem. The findings provide fundamental guidelines to improve performance in image compression and recovery based on compressive sensing.

Details

Soldering & Surface Mount Technology, vol. 26 no. 3
Type: Research Article
ISSN: 0954-0911

Keywords

Article
Publication date: 5 June 2017

Zhoufeng Liu, Lei Yan, Chunlei Li, Yan Dong and Guangshuai Gao

The purpose of this paper is to find an efficient fabric defect detection algorithm by means of exploring the sparsity characteristics of main local binary pattern (MLBP…

Abstract

Purpose

The purpose of this paper is to find an efficient fabric defect detection algorithm by means of exploring the sparsity characteristics of main local binary pattern (MLBP) extracted from the original fabric texture.

Design/methodology/approach

In the proposed algorithm, original LBP features are first extracted from the fabric texture to be detected, and MLBPs are selected by occurrence probability. Second, a dictionary is established from MLBP atoms which can sparsely represent all the LBPs. Then, the gray-scale differences between the neighborhood pixels and the central pixel are calculated, together with the mean difference over pixels sharing the same MLBP feature. Next, the defect-containing image is reconstructed as a normal texture image. Finally, the residual between the reconstructed and original images is calculated, a simple threshold segmentation divides the residual image, and the defective region is detected.
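
As a small illustration of the final step of this pipeline, the sketch below thresholds the residual between the original image and the reconstructed normal-texture image; the reconstruction is assumed to come from the MLBP dictionary step, and the threshold rule is an assumption.

    # Illustrative residual thresholding for defect localization (Python).
    import numpy as np

    def detect_defects(original, reconstructed, k=3.0):
        residual = np.abs(original.astype(float) - reconstructed.astype(float))
        threshold = residual.mean() + k * residual.std()   # simple global threshold (assumed rule)
        return residual > threshold                        # boolean mask marking defective regions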

Findings

The experimental results show that the fabric texture can be reconstructed more efficiently and that the proposed method achieves better defect detection performance. Moreover, the work offers empirical insights into how to exploit the sparsity of a given feature, e.g. LBP.

Research limitations/implications

Because of the selected research approach, the results may lack generalizability to chambray. Therefore, researchers are encouraged to test the proposed propositions further.

Originality/value

In this paper, a novel fabric defect detection method that exploits the sparsity of MLBP features is proposed.

Details

International Journal of Clothing Science and Technology, vol. 29 no. 3
Type: Research Article
ISSN: 0955-6222

Keywords

Article
Publication date: 9 August 2021

Hrishikesh B Vanjari and Mahesh T Kolte

Speech is the primary means of communication for humans. A proper functioning auditory system is needed for accurate cognition of speech. Compressed sensing (CS) is a method for…

Abstract

Purpose

Speech is the primary means of communication for humans, and a properly functioning auditory system is needed for accurate cognition of speech. Compressed sensing (CS) is a method for simultaneous compression and sampling of a given signal, and it is a novel method increasingly being used in many speech processing applications. This paper aims to use a compressive sensing algorithm in hearing aid applications to reduce surrounding noise.

Design/methodology/approach

In this work, the authors propose a machine learning algorithm for improving the performance of compressive sensing using a neural network.

Findings

The proposed solution is able to reduce the signal reconstruction time by about 21.62% and the root mean square error by about 43% compared with the default L2 norm minimization used in CS reconstruction. This work proposes an adaptive neural network-based algorithm to enhance compressive sensing so that it can reconstruct the signal in comparatively less time and with minimal distortion of its quality.

Research limitations/implications

The use of compressive sensing for speech enhancement in a hearing aid is limited due to the delay in the reconstruction of the signal.

Practical implications

In many digital applications, the acquired raw signals are compressed to achieve a smaller size so that they become efficient to store and transmit. In this process, even unnecessary signals are acquired and compressed, leading to inefficiency.

Social implications

Hearing loss is the most common sensory deficit in humans today. Worldwide, it is the second leading cause of “years lived with disability,” the first being depression. A recent study by the World Health Organization estimates that nearly 450 million people in the world are disabled by hearing loss, and the prevalence of hearing impairment in India is around 6.3% (63 million people suffering from significant auditory loss).

Originality/value

The objective is to reduce the time taken for CS reconstruction with minimal degradation of the reconstructed signal. The solution must also be adaptive to different characteristics of the signal and to the presence of different types of noise.

Details

World Journal of Engineering, vol. 19 no. 2
Type: Research Article
ISSN: 1708-5284

Keywords
