Search results

1 – 10 of 11
Article
Publication date: 9 August 2021

Hrishikesh B Vanjari and Mahesh T Kolte

Speech is the primary means of communication for humans. A proper functioning auditory system is needed for accurate cognition of speech. Compressed sensing (CS) is a method for…

Abstract

Purpose

Speech is the primary means of communication for humans. A properly functioning auditory system is needed for accurate cognition of speech. Compressed sensing (CS) is a method for simultaneous compression and sampling of a given signal, and it is a novel method increasingly being used in many speech processing applications. The paper aims to use a compressive sensing algorithm in hearing aid applications to reduce surrounding noise.

Design/methodology/approach

In this work, the authors propose a machine learning algorithm for improving the performance of compressive sensing using a neural network.

Findings

The proposed solution reduces the signal reconstruction time by about 21.62% and the root mean square error by about 43% compared to the default L2-norm minimization used in CS reconstruction. This work proposes an adaptive neural network-based algorithm to enhance compressive sensing so that the signal is reconstructed in comparatively less time and with minimal distortion to its quality.
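For context, a minimal sketch of that default L2-norm baseline: for an underdetermined CS system y = A x, the minimum-energy solution is x = A^T (A A^T)^(-1) y. The sizes, random measurement matrix and toy sparse signal below are illustrative assumptions, and the authors' neural-network stage is not reproduced here.

```python
# Minimal sketch of the L2-norm CS reconstruction baseline (assumed setup,
# not the authors' code): solve the underdetermined system y = A x by
# taking the minimum-L2-norm solution x = A^T (A A^T)^(-1) y.
import numpy as np

rng = np.random.default_rng(0)
n, m = 256, 64                                    # signal length, measurements
A = rng.standard_normal((m, n)) / np.sqrt(m)      # random measurement matrix
x_true = np.zeros(n)
x_true[rng.choice(n, 8, replace=False)] = rng.standard_normal(8)  # sparse signal
y = A @ x_true                                    # compressed measurements

x_l2 = A.T @ np.linalg.solve(A @ A.T, y)          # minimum-L2-norm reconstruction
print("RMSE:", np.sqrt(np.mean((x_l2 - x_true) ** 2)))
```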

Research limitations/implications

The use of compressive sensing for speech enhancement in a hearing aid is limited due to the delay in the reconstruction of the signal.

Practical implications

In many digital applications, the acquired raw signals are compressed to a smaller size so that they can be stored and transmitted efficiently. In this process, even unnecessary signals are acquired and compressed, leading to inefficiency.

Social implications

Hearing loss is the most common sensory deficit in humans today. Worldwide, it is the second leading cause of "years lived with disability", the first being depression. A recent study by the World Health Organization estimates that nearly 450 million people in the world have been disabled by hearing loss, and the prevalence of hearing impairment in India is around 6.3% (63 million people suffering from significant auditory loss).

Originality/value

The objective is to reduce the time taken for CS reconstruction with minimal degradation of the reconstructed signal. Also, the solution must adapt to different characteristics of the signal and to the presence of different types of noise.

Details

World Journal of Engineering, vol. 19 no. 2
Type: Research Article
ISSN: 1708-5284

Article
Publication date: 4 August 2021

Chenglong Yu, Zhiqi Li, Dapeng Yang, Hong Liu and Alan F. Lynch

This study aims to propose a novel method based on model learning with sparsity inducing norms for estimating dynamic gravity terms of the serial manipulators. This method is…

Abstract

Purpose

This study aims to propose a novel method based on model learning with sparsity-inducing norms for estimating the dynamic gravity terms of serial manipulators. The method is realized by operating the robot, acquiring data and filtering the features in signal acquisition to adapt to the dynamic gravity parameters.

Design/methodology/approach

The core principle of the method is to analyze the dictionary composition of the basis functions of the model, based on the dynamic equation and the Jacobian matrix of an arm. According to the structure of the basis functions and the sparsity of the features, and combined with joint-angle and driving-torque data acquisition, the effective features of the dynamic gravity parameters are screened out using L1-norm optimization and learning algorithms.
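As a rough illustration of this screening step (an assumed toy example, not the paper's model): gravity torque is linear in the unknown parameters, tau = Y(q) theta, where Y stacks trigonometric basis functions of the joint angles, so an L1-penalized fit such as scikit-learn's Lasso zeroes out the inactive features. The 2-DOF basis below is hypothetical.

```python
# Hedged sketch: L1-norm screening of gravity basis functions for an
# assumed planar 2-link arm, with tau = Y(q) @ theta and theta sparse.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
q = rng.uniform(-np.pi, np.pi, size=(500, 2))            # joint-angle samples

def basis(q):
    # candidate gravity basis functions (illustrative dictionary)
    q1, q2 = q[:, 0], q[:, 1]
    return np.column_stack([np.cos(q1), np.sin(q1),
                            np.cos(q1 + q2), np.sin(q1 + q2),
                            np.cos(q2), np.sin(q2)])

theta_true = np.array([3.2, 0.0, 1.1, 0.0, 0.0, 0.0])    # sparse ground truth
Y = basis(q)
tau = Y @ theta_true + 0.01 * rng.standard_normal(len(Y))  # noisy torque data

model = Lasso(alpha=0.01, fit_intercept=False).fit(Y, tau)
print("selected features:", np.flatnonzero(np.abs(model.coef_) > 1e-3))
print("estimated parameters:", np.round(model.coef_, 3))
```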

Findings

The theoretical analysis revealed that training data obtained from joint angles and driving torques can rapidly update the dynamic gravity parameters. A simulation experiment was carried out using a publicly available robot model and compared with the previous disassembly method to evaluate feasibility and performance. A real 7-degree-of-freedom (DOF) industrial manipulator was used to further examine the effects of the feature selection. The results show that this estimation method is fully operational and efficient in industrial applications.

Research limitations/implications

This approach is applicable to most multi-DOF serial robots, whose dynamic gravity parameters are estimated through learning and optimization. The method requires no prior knowledge of the robot arm structure, only joint-angle and driving-torque data acquired under low-speed motion. Furthermore, as a data-driven method, it can be applied to updating the gravity parameters.

Originality/value

Different from previous general robot dynamic modelling methods, the sparsity of the analytical form of the dynamic equations was exploited and model learning was formulated as a convex optimization problem to achieve effective gravity-parameter screening. The novelty of this estimation approach is that it requires neither prior knowledge nor a specifically designed trajectory. Thus, the method avoids the laborious work of parameter calibration and the modelling errors it induces. By using a data-driven learning approach, the parameter updating process can be completed conveniently when the robot carries an additional mass or the end-effector changes for different tasks.

Details

Industrial Robot: the international journal of robotics research and application, vol. 48 no. 6
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 5 June 2017

Zhoufeng Liu, Lei Yan, Chunlei Li, Yan Dong and Guangshuai Gao

The purpose of this paper is to find an efficient fabric defect detection algorithm by means of exploring the sparsity characteristics of main local binary pattern (MLBP…

Abstract

Purpose

The purpose of this paper is to find an efficient fabric defect detection algorithm by means of exploring the sparsity characteristics of main local binary pattern (MLBP) extracted from the original fabric texture.

Design/methodology/approach

In the proposed algorithm, original LBP features are first extracted from the fabric texture to be inspected, and MLBPs are selected by occurrence probability. Second, a dictionary is established from MLBP atoms which can sparsely represent all the LBPs. Then, the gray-scale differences between the neighborhood pixels and the central pixel are calculated, along with the mean difference for pixels sharing the same MLBP feature, and the defect-containing image is reconstructed as a normal texture image. Finally, the residual between the reconstructed and original images is calculated, and a simple threshold segmentation of the residual image detects the defective region.
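A minimal sketch of the first two steps, computing 8-neighbour LBP codes and selecting MLBP atoms by occurrence probability, is given below; the dictionary-based reconstruction and residual thresholding are omitted, and the random array merely stands in for a real fabric image.

```python
# Hedged sketch (not the authors' code): LBP extraction and MLBP selection.
import numpy as np

def lbp_codes(img):
    """8-neighbour LBP code for each interior pixel of a 2-D array."""
    c = img[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.uint8) << bit   # set bit if neighbour >= centre
    return code

rng = np.random.default_rng(2)
texture = rng.integers(0, 256, size=(64, 64))      # stand-in for a fabric image
codes = lbp_codes(texture)
hist = np.bincount(codes.ravel(), minlength=256) / codes.size  # occurrence probability
mlbp = np.argsort(hist)[::-1][:16]                 # 16 most probable patterns = MLBP atoms
print("MLBP atoms (pattern ids):", mlbp)
```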

Findings

The experimental results show that the fabric texture can be efficiently reconstructed and that the proposed method achieves better defect detection performance. Moreover, it offers empirical insights into how to exploit the sparsity of a particular feature, e.g. LBP.

Research limitations/implications

Because of the selected research approach, the results may lack generalizability to chambray. Therefore, researchers are encouraged to test the proposed propositions further.

Originality/value

In this paper, a novel fabric defect detection method which exploits the sparsity of MLBP features is proposed.

Details

International Journal of Clothing Science and Technology, vol. 29 no. 3
Type: Research Article
ISSN: 0955-6222

Article
Publication date: 8 January 2021

Ashok Naganath Shinde, Sanjay L. Nalbalwar and Anil B. Nandgaonkar

In today’s digital world, real-time health monitoring is becoming a most important challenge in the field of medical research. Body signals such as electrocardiogram (ECG)…

Abstract

Purpose

In today’s digital world, real-time health monitoring is becoming one of the most important challenges in the field of medical research. Body signals such as the electrocardiogram (ECG), electromyogram and electroencephalogram (EEG) are produced in the human body. This continuous monitoring generates a huge amount of data, and thus an efficient method is required to shrink the size of the obtained data. Compressed sensing (CS) is one of the techniques used to compress the data size. It is most used in applications where the data are huge or the acquisition process is too expensive to gather samples at the Nyquist rate. This paper aims to propose the Lion Mutated Crow search Algorithm (LM-CSA) to improve the performance of the CS model.

Design/methodology/approach

A new CS algorithm is exploited in this paper, where the compression process undergoes three stages: design of a stable measurement matrix, signal compression and signal reconstruction. The compression process follows a fixed working principle: signal transformation, computation of Θ and normalization. As the main contribution, the Θ evaluation is performed by a new "enhanced bi-orthogonal wavelet filter." The enhancement lies in the scaling coefficients, which are optimally tuned for the compression. This tuning is the main difficulty, however, and hence this work adopts meta-heuristic algorithms. Moreover, a new hybrid algorithm is introduced to resolve this optimization difficulty. The proposed algorithm, the "Lion Mutated Crow search Algorithm (LM-CSA)," hybridizes the crow search algorithm (CSA) and the lion algorithm (LA).
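To make those stages concrete, the sketch below assembles a random Gaussian measurement matrix Phi, a sparsifying synthesis basis Psi (a DCT basis stands in for the paper's enhanced bi-orthogonal wavelet, whose tuned coefficients are not given in the abstract), the matrix Theta = Phi Psi with column normalization, and the compression y = Phi x; all sizes and the toy signal are assumptions.

```python
# Hedged sketch of the CS compression stages: measurement matrix, signal
# transformation, Theta computation and normalization. DCT is a stand-in
# sparsifying basis, not the paper's tuned bi-orthogonal wavelet.
import numpy as np
from scipy.fft import idct

n, m = 256, 96
rng = np.random.default_rng(3)
phi = rng.standard_normal((m, n)) / np.sqrt(m)     # stable measurement matrix
psi = idct(np.eye(n), axis=0, norm="ortho")        # DCT synthesis basis: x = psi @ s
theta = phi @ psi                                  # Theta = Phi * Psi
theta /= np.linalg.norm(theta, axis=0)             # column normalization

t = np.linspace(0, 1, n)
x = np.sin(2 * np.pi * 7 * t) + 0.5 * np.sin(2 * np.pi * 13 * t)  # toy biosignal
y = phi @ x                                        # compressed measurements
print("compression ratio:", n / m)
```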

Findings

Finally, the proposed LM-CSA model is compared with the traditional models in terms of certain error measures such as mean error percentage (MEP), symmetric mean absolute percentage error (SMAPE), mean absolute scaled error, mean absolute error (MAE), root mean square error, L1-norm, L2-norm and infinity-norm. For ECG analysis, under bior 3.1, LM-CSA is 56.6, 62.5 and 81.5% better than the bi-orthogonal wavelet in terms of MEP, SMAPE and MAE, respectively. Under bior 3.7 for ECG analysis, LM-CSA is 0.15% better than the genetic algorithm (GA), 0.10% superior to particle swarm optimization (PSO), 0.22% superior to firefly (FF), 0.22% superior to CSA and 0.14% superior to LA in terms of L1-norm. Further, for EEG analysis, LM-CSA is 86.9 and 91.2% better than the traditional bi-orthogonal wavelet under bior 3.1. Under bior 3.3, LM-CSA is 91.7 and 73.12% better than the bi-orthogonal wavelet in terms of MAE and MEP, respectively. Under bior 3.5 for EEG, the L1-norm of LM-CSA is 0.64% superior to GA, 0.43% to PSO, 0.62% to FF, 0.84% to CSA and 0.60% to LA.

Originality/value

This paper presents a novel CS framework using the LM-CSA algorithm for EEG and ECG signal compression. To the best of the authors’ knowledge, this is the first work to use LM-CSA with an enhanced bi-orthogonal wavelet filter for enhancing the CS capability as well as reducing the errors.

Details

International Journal of Pervasive Computing and Communications, vol. 18 no. 5
Type: Research Article
ISSN: 1742-7371

Article
Publication date: 18 October 2019

A. Kullaya Swamy and Sarojamma B.

Data mining plays a major role in forecasting the open price details of the stock market. However, it fails to address the dimensionality and expectancy of a naive investor…

Abstract

Purpose

Data mining plays a major role in forecasting the open price details of the stock market. However, it fails to address the dimensionality and expectancy of a naive investor. Hence, this paper aims to implement a future prediction model, namely a time series model.

Design/methodology/approach

In this model, the stock market data are fed to the proposed deep belief network (DBN), and the number of hidden neurons is optimized by a modified JAYA algorithm (JA) based on the fitness function. Hence, the algorithm is termed fitness-oriented JA (FJA), and the proposed model is termed FJA-DBN. The primary objective of this open-price forecasting model is the minimization of the error function between the modeled and actual output.
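The abstract does not spell out the fitness-oriented modification, but the base JAYA update it builds on is simple: each candidate moves toward the best solution and away from the worst. Below is a toy sketch with an assumed one-dimensional fitness standing in for DBN prediction error as a function of hidden-neuron count.

```python
# Hedged sketch of the plain JAYA update rule; the paper's FJA modification
# and the DBN training loop are not specified in the abstract.
import numpy as np

def fitness(h):
    # toy stand-in for prediction error vs. hidden-neuron count
    return (h - 37.0) ** 2              # pretend 37 hidden neurons is optimal

rng = np.random.default_rng(4)
pop = rng.uniform(1, 100, size=10)      # candidate hidden-neuron counts
for _ in range(50):
    f = fitness(pop)
    best, worst = pop[np.argmin(f)], pop[np.argmax(f)]
    r1, r2 = rng.random(pop.shape), rng.random(pop.shape)
    cand = pop + r1 * (best - np.abs(pop)) - r2 * (worst - np.abs(pop))  # JAYA move
    cand = np.clip(cand, 1, 100)
    pop = np.where(fitness(cand) < f, cand, pop)   # keep improvements only

best_h = pop[np.argmin(fitness(pop))]
print("best hidden-neuron count ~", int(round(float(best_h))))
```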

Findings

The performance analysis demonstrates that FJA-DBN predicts the open price details of the Tata Motors, Reliance Power and Infosys data with lower deviation, showing better performance in terms of mean error percentage, symmetric mean absolute percentage error, mean absolute scaled error, mean absolute error, root mean square error, L1-norm, L2-norm and infinity-norm (least infinity error).

Research limitations/implications

The proposed model can be used to forecast the open price details.

Practical implications

Investors constantly review past pricing history and use it to influence their future investment decisions. There are some basic assumptions in this analysis: first, that everything significant about a company is already priced into the stock; second, that the price moves in trends.

Originality/value

This paper presents a technique for time series modeling using JA. This is the first work that uses FJA-based optimization for stock market open price prediction.

Details

Kybernetes, vol. 49 no. 9
Type: Research Article
ISSN: 0368-492X

Article
Publication date: 1 September 2001

H. Brauer, M. Ziolkowski and J. Haueisen

We applied minimum norm estimations using different regularization techniques to the solution of the biomagnetic inverse field problem. Using magnetic field data measured with a…

Abstract

We applied minimum norm estimations using different regularization techniques to the solution of the biomagnetic inverse field problem. Using magnetic field data measured with a multi-channel SQUID sensor system, we computed reconstructions of the impressed current density distributions generated by extended current sources placed inside a human torso phantom. The common inverse techniques usually applied in modern biomedical investigations in bioelectricity or biomagnetism are compared, and their aptitude for reconstructing 3D current sources in space is evaluated. We analyzed the impact of using magnetic data, electrical data, and a combination of both on the localization of an equivalent current dipole (ECD). Finally, we use a visualization tool which enables a comparison of current density reconstructions. The study is, in part, related to the new TEAM problem No. 31.
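As one concrete instance of a regularized minimum-norm estimate of the kind compared here: with a lead-field matrix L mapping source currents j to measured fields b, the Tikhonov-regularized estimate is j = L^T (L L^T + lambda I)^(-1) b. The sketch below uses a random stand-in lead field, not a torso-phantom forward model, and an assumed regularization level.

```python
# Hedged sketch of a Tikhonov-regularized minimum-norm estimate (MNE).
import numpy as np

rng = np.random.default_rng(5)
n_sensors, n_sources = 31, 300
L = rng.standard_normal((n_sensors, n_sources))    # stand-in lead field
j_true = np.zeros(n_sources)
j_true[140:150] = 1.0                              # extended current source
b = L @ j_true + 0.01 * rng.standard_normal(n_sensors)

lam = 1e-2 * np.trace(L @ L.T) / n_sensors         # assumed regularization level
j_mne = L.T @ np.linalg.solve(L @ L.T + lam * np.eye(n_sensors), b)
print("peak estimated source index:", int(np.argmax(np.abs(j_mne))))
```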

Details

COMPEL - The international journal for computation and mathematics in electrical and electronic engineering, vol. 20 no. 3
Type: Research Article
ISSN: 0332-1649

Article
Publication date: 19 September 2016

Ziqiang Cui, Qi Wang, Qian Xue, Wenru Fan, Lingling Zhang, Zhang Cao, Benyuan Sun, Huaxiang Wang and Wuqiang Yang

Electrical capacitance tomography (ECT) and electrical resistance tomography (ERT) are promising techniques for multiphase flow measurement due to their high speed, low cost…

Abstract

Purpose

Electrical capacitance tomography (ECT) and electrical resistance tomography (ERT) are promising techniques for multiphase flow measurement due to their high speed, low cost, non-invasiveness and visualization features. There are two major difficulties in image reconstruction for ECT and ERT: the "soft-field" effect, and the ill-posedness of the inverse problem, which involves two issues: the problem is under-determined, and the solution is not stable, i.e. it is very sensitive to measurement errors and noise. This paper aims to summarize and evaluate the various reconstruction algorithms which have been studied and developed in the world for many years and to provide a reference for further research and application.

Design/methodology/approach

In the past 10 years, various image reconstruction algorithms have been developed to deal with these problems, both in industrial multi-phase flow measurement and in biomedical diagnosis.
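The abstract does not single out a specific algorithm, but one widely used iterative scheme in ECT/ERT reconstruction is the Landweber iteration on a linearized sensitivity matrix; the sketch below uses illustrative sizes (e.g. the 66 independent capacitance measurements of a 12-electrode ECT sensor) and shows the basic update g <- g + alpha * S^T (lam - S g) with a projection step.

```python
# Hedged sketch of projected Landweber iteration for linearized ECT/ERT;
# S, the image grid and the data are illustrative stand-ins.
import numpy as np

rng = np.random.default_rng(6)
n_meas, n_pix = 66, 32 * 32                  # 12-electrode ECT, 32x32 image grid
S = rng.standard_normal((n_meas, n_pix)) / np.sqrt(n_meas)   # sensitivity matrix
g_true = np.zeros(n_pix)
g_true[200:230] = 1.0                        # stand-in inclusion
lam_meas = S @ g_true                        # normalized measurements

alpha = 1.0 / np.linalg.norm(S, 2) ** 2      # step size for convergence
g = np.zeros(n_pix)
for _ in range(200):
    g += alpha * S.T @ (lam_meas - S @ g)    # Landweber update
    g = np.clip(g, 0.0, 1.0)                 # project to physical range
print("relative error:", np.linalg.norm(g - g_true) / np.linalg.norm(g_true))
```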

Findings

This paper reviews existing image reconstruction algorithms, together with the new algorithms proposed by the authors, for electrical capacitance tomography and electrical resistance tomography in multi-phase flow measurement and biomedical diagnosis.

Originality/value

The authors systematically summarize and evaluate the various reconstruction algorithms which have been studied and developed in the world for many years, providing a valuable reference for practical applications.

Article
Publication date: 1 June 2001

Hartmut Brauer, Marek Ziolkowski, Uwe Tenner, Jens Haueisen and Hannes Nowak

Applies four different minimum norm estimations with common regularization techniques, often used in biomedical applications to the solution of the biomagnetic inverse field…

Abstract

Applies four different minimum norm estimations with common regularization techniques, often used in biomedical applications, to the solution of the biomagnetic inverse field problem. Magnetic field data measured with a multi-channel biomagnetometer sensor system in a magnetically shielded room were used to reconstruct the current density distributions generated by an extended current source which was placed inside a human torso phantom. None of the tested methods is able to estimate the extension of the source. To improve the results as much as possible, a priori information about the source space should be taken into account.
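Another common regularization for the same minimum-norm problem, plausibly among the "four different estimations" although the abstract does not name them, is truncated SVD, which discards the small singular values that destabilize the inverse. A compact sketch with stand-in data follows.

```python
# Hedged sketch of a truncated-SVD minimum-norm estimate; L and b are
# illustrative stand-ins for the lead field and measured field pattern.
import numpy as np

def tsvd_min_norm(L, b, k):
    """Minimum-norm solution of L j = b keeping the k largest singular values."""
    U, s, Vt = np.linalg.svd(L, full_matrices=False)
    return Vt[:k].T @ ((U[:, :k].T @ b) / s[:k])

rng = np.random.default_rng(7)
L = rng.standard_normal((31, 300))
j = np.zeros(300)
j[150] = 1.0                                # single active source at index 150
b = L @ j
print("peak source:", int(np.argmax(np.abs(tsvd_min_norm(L, b, k=20)))))
```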

Details

COMPEL - The international journal for computation and mathematics in electrical and electronic engineering, vol. 20 no. 2
Type: Research Article
ISSN: 0332-1649

Article
Publication date: 7 February 2022

Muralidhar Vaman Kamath, Shrilaxmi Prashanth, Mithesh Kumar and Adithya Tantri

The compressive strength of concrete depends on many interdependent parameters; its exact prediction is not that simple because of complex processes involved in strength…

Abstract

Purpose

The compressive strength of concrete depends on many interdependent parameters; its exact prediction is not simple because of the complex processes involved in strength development. This study aims to predict the compressive strength of normal concrete and high-performance concrete using four datasets.

Design/methodology/approach

In this paper, five established individual machine learning (ML) regression models are compared: Decision Tree Regression, Random Forest Regression, Lasso Regression, Ridge Regression and Multiple Linear Regression. Four datasets were studied, two of which come from previous research and two from the authors' laboratory, all evaluated with the five established individual ML regression models.

Findings

Five statistical indicators, namely the coefficient of determination (R2), mean absolute error, root mean squared error, Nash–Sutcliffe efficiency and mean absolute percentage error, have been used to compare the performance of the models. The models are further compared with previous studies using these indicators. Lastly, to understand the effect of each predictor, sensitivity and parametric analyses were carried out.
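A hedged sketch of computing these five indicators on a held-out split; synthetic data and a default RandomForestRegressor stand in here for the paper's concrete datasets and tuned models.

```python
# Hedged sketch: the five comparison indicators on a held-out test split.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score, mean_absolute_error, mean_squared_error
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=400, n_features=8, noise=5.0, random_state=8)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=8)
pred = RandomForestRegressor(random_state=8).fit(X_tr, y_tr).predict(X_te)

r2 = r2_score(y_te, pred)
mae = mean_absolute_error(y_te, pred)
rmse = np.sqrt(mean_squared_error(y_te, pred))
nse = 1 - np.sum((y_te - pred) ** 2) / np.sum((y_te - y_te.mean()) ** 2)  # Nash-Sutcliffe
mask = np.abs(y_te) > 1e-6                       # MAPE assumes targets away from zero
mape = np.mean(np.abs((y_te[mask] - pred[mask]) / y_te[mask])) * 100
print(f"R2={r2:.3f} MAE={mae:.2f} RMSE={rmse:.2f} NSE={nse:.3f} MAPE={mape:.1f}%")
```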

Originality/value

The findings of this paper will allow readers to understand the factors involved in selecting machine learning models and concrete datasets. In so doing, we hope that this research advances the toolset needed to predict compressive strength.

Details

Journal of Engineering, Design and Technology, vol. 22 no. 2
Type: Research Article
ISSN: 1726-0531

Article
Publication date: 9 October 2018

F. Li, M. Soleimani and J. Abascal

Magnetic induction tomography (MIT) is a tomographic imaging technique with a wide range of potential industrial applications. Planar array MIT is a convenient setup but unable to…

Abstract

Purpose

Magnetic induction tomography (MIT) is a tomographic imaging technique with a wide range of potential industrial applications. A planar array is a convenient MIT setup, but it cannot access the object freely from the entire periphery, as it collects measurements from only one surface, so imaging remains challenging given the limited data. This study aims to assess the use of sparse regularization methods for accurate position and depth detection in planar array MIT.

Design/methodology/approach

The most difficult challenges in MIT are solving the forward and inverse problems. The inversion of planar MIT is severely ill-posed due to the limited-access data. Thus, this paper poses a total variation (TV) problem and solves it efficiently with the Split Bregman formulation to overcome this difficulty. Both isotropic and anisotropic TV formulations are compared to Tikhonov regularization on experimental MIT data.
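A 1-D sketch of the Split Bregman scheme for the anisotropic TV problem min_x mu/2 ||A x - b||^2 + ||D x||_1, with D a finite-difference operator; the paper's 2-D planar-MIT version is not reproduced, and the sizes and random forward model are illustrative only.

```python
# Hedged 1-D sketch of Split Bregman for anisotropic TV regularization.
import numpy as np

def split_bregman_tv(A, b, mu=10.0, lam=1.0, n_iter=100):
    n = A.shape[1]
    D = np.diff(np.eye(n), axis=0)                 # finite-difference operator
    lhs = mu * A.T @ A + lam * D.T @ D             # fixed normal-equation matrix
    x = np.zeros(n)
    d = np.zeros(n - 1)                            # auxiliary TV variable
    v = np.zeros(n - 1)                            # Bregman variable
    for _ in range(n_iter):
        rhs = mu * A.T @ b + lam * D.T @ (d - v)
        x = np.linalg.solve(lhs, rhs)              # quadratic subproblem
        w = D @ x + v
        d = np.sign(w) * np.maximum(np.abs(w) - 1.0 / lam, 0.0)  # soft shrinkage
        v += D @ x - d                             # Bregman update
    return x

rng = np.random.default_rng(9)
A = rng.standard_normal((40, 100))                 # limited-data forward model
x_true = np.zeros(100)
x_true[30:50] = 1.0                                # piecewise-constant target
b = A @ x_true + 0.01 * rng.standard_normal(40)
x_tv = split_bregman_tv(A, b)
print("relative error:", np.linalg.norm(x_tv - x_true) / np.linalg.norm(x_true))
```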

Findings

The results show that the Tikhonov method failed to recover, or underestimated, the object position and depth, whereas both isotropic and anisotropic TV led to accurate recovery of depth and position.

Originality/value

There are numerous potential applications for planar array MIT where access to the materials under test is restricted. Sparse regularization methods are a promising approach to improving depth detection from limited MIT data.

Details

Sensor Review, vol. 39 no. 2
Type: Research Article
ISSN: 0260-2288
