Search results

1 – 10 of 433
Article
Publication date: 27 July 2021

Papangkorn Pidchayathanakorn and Siriporn Supratid

Abstract

Purpose

A key factor in the success of Bayes threshold denoising is noise variance estimation. This paper focuses on assessing different noise variance estimations in three Bayes threshold models on two brain lesion/tumor magnetic resonance images (MRIs) with different characteristics.

Design/methodology/approach

Three Bayes threshold denoising models based on different noise variance estimations in the stationary wavelet transform (SWT) domain are assessed and compared against the state-of-the-art non-local means (NLM) method. The three models, named D1, GB and DR, estimate the noise variance from the finest detail wavelet subband at the first resolution level, from all detail subbands globally, and from the detail subband in each direction and resolution, respectively. Explicit and implicit denoising performance are assessed in turn through threshold denoising and segmentation identification results.
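
As a concrete illustration of the kind of model being compared, the following is a minimal sketch (an assumption, not the authors' code) of BayesShrink-style soft thresholding in the SWT domain with a D1-style noise estimate, i.e. the robust median rule applied to the finest diagonal detail subband. The `db4` wavelet and single-level decomposition are illustrative choices.

```python
# Hedged sketch, not the authors' code: BayesShrink-style soft
# thresholding in the SWT domain, with the noise variance estimated
# from the finest diagonal detail subband by the robust median rule
# (a D1-style estimator). Wavelet and level are illustrative.
import numpy as np
import pywt

def bayes_swt_denoise(img, wavelet="db4"):
    # Level-1 stationary (undecimated) 2-D transform; the image sides
    # must be even for pywt.swt2 at level 1.
    (cA, (cH, cV, cD)), = pywt.swt2(np.asarray(img, dtype=float),
                                    wavelet, level=1)

    # Robust median estimate of the noise std from the diagonal band.
    sigma_n = np.median(np.abs(cD)) / 0.6745

    def bayes_thr(band):
        # BayesShrink: T = sigma_n^2 / sigma_x, where sigma_x is the
        # estimated signal std; fall back to max(|band|) if it vanishes.
        sigma_x = np.sqrt(max(band.var() - sigma_n**2, 0.0))
        return np.abs(band).max() if sigma_x == 0 else sigma_n**2 / sigma_x

    # Soft-threshold each detail subband; keep the approximation intact.
    cH, cV, cD = (pywt.threshold(b, bayes_thr(b), mode="soft")
                  for b in (cH, cV, cD))
    return pywt.iswt2([(cA, (cH, cV, cD))], wavelet)
```

The GB and DR variants named above would differ only in which subbands feed the `sigma_n` estimate (all detail subbands globally, or each direction and resolution separately).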

Findings

The implicit performance assessment shows the first and second best accuracies, Dice similarity coefficients (Dice) of 0.9181 and 0.9048, yielded by GB and DR, respectively. Reliability is indicated by a 45.66% Dice drop for DR, compared with 53.38%, 61.03% and 35.48% for D1, GB and NLM, when the noise level on the brain lesion MRI is increased from 0.2 to 0.9. For the brain tumor MRI at a 0.2 noise level, DR gives the best accuracy, 0.9592 Dice; however, DR's Dice drop of 8.09% compares with 6.72%, 8.85% and 39.36% for D1, GB and NLM. NLM clearly shows the lowest explicit and implicit denoising performance.

Research limitations/implications

A future improvement in denoising performance could come from a semi-supervised conjunction denoising model. Such a model would use the MRIs denoised by the DR and D1 thresholding models as the uncorrupted image versions, together with the noisy MRIs as the corrupted versions, during the training phase of an autoencoder that reconstructs the original clean image.

Practical implications

This paper should be of interest to readers in the areas of computing and information science, including data science and computational health informatics, especially those applying decision support tools for medical image processing.

Originality/value

In most cases, DR and D1 provide the first and second best implicit performances in terms of accuracy and reliability on both the simulated, low-detail, small-size region-of-interest (ROI) brain lesion MRIs and the realistic, high-detail, large-size ROI brain tumor MRIs.

Article
Publication date: 1 October 2018

Vinod Nistane and Suraj Harsha

Abstract

Purpose

In rotary machines, bearing failure is one of the major causes of machinery breakdown, so monitoring bearing degradation is a major concern for the prevention of bearing failures. This paper aims to present a combination of stationary wavelet decomposition and extra-trees regression (ETR) for the evaluation of bearing degradation.

Design/methodology/approach

Higher-order cumulant features are extracted from the bearing vibration signals using stationary wavelet decomposition (stationary wavelet transform [SWT]). The extracted features are then passed to the ETR to obtain the normal and failure states. A dominance-level curve, built from the dissimilarity data of the test object, is retained as a health-degradation indicator for evaluating bearing health.
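
A minimal sketch of this pipeline, under stated assumptions: skewness and kurtosis stand in for the higher-order cumulant features, the `db4` wavelet and three levels are illustrative, and the 0-to-1 health target is an assumed labeling scheme, not the paper's.

```python
# Hedged sketch of the described pipeline: higher-order statistics of
# SWT subbands as features, regressed onto a bearing-health target.
# The feature set and the 0-1 health label are illustrative assumptions.
import numpy as np
import pywt
from scipy.stats import kurtosis, skew
from sklearn.ensemble import ExtraTreesRegressor

def swt_cumulant_features(signal, wavelet="db4", level=3):
    n = (len(signal) // 2**level) * 2**level  # pywt.swt length rule
    feats = []
    for cA, cD in pywt.swt(np.asarray(signal[:n], dtype=float),
                           wavelet, level=level):
        # Skewness and kurtosis serve as 3rd/4th-order cumulant proxies.
        feats += [skew(cD), kurtosis(cD), np.var(cD)]
    return np.array(feats)

# X: one feature row per vibration snapshot; y: assumed degradation
# indicator in [0, 1] (0 = healthy, 1 = failed).
# model = ExtraTreesRegressor(n_estimators=200).fit(X, y)
# health = model.predict(swt_cumulant_features(new_signal)[None, :])
```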

Findings

Experiments were conducted to verify and assess the effectiveness of ETR for evaluating bearing degradation. To justify the pre-eminence of the recommended approach, it is compared with random forest regression and multi-layer perceptron regression.

Originality/value

The experimental results indicate that the adopted method detects degradation more accurately at an early stage. Furthermore, diagnostics and prognostics have been receiving much attention in the field of vibration analysis, where they play a significant role in avoiding accidents.

Book part
Publication date: 14 December 2018

Ramazan Yildirim and Mansur Masih

Abstract

The purpose of this chapter is to analyze the possible portfolio diversification opportunities between the Asian Islamic market and the Islamic markets of other regions, namely the USA, Europe and BRIC. This study makes an initial attempt to fill the gaps in previous studies by focusing on proxies of global Islamic markets and identifying the correlations among the selected markets using recent econometric methodologies: multivariate generalized autoregressive conditional heteroscedasticity with dynamic conditional correlations (MGARCH-DCC), the maximum overlap discrete wavelet transform (MODWT) and the continuous wavelet transform (CWT). MGARCH-DCC is used to identify the strength of the time-varying correlations among the markets, while CWT reveals the time-scale-dependent nature of these correlations; for robustness, the authors applied the MODWT methodology as well. The findings indicate that Asian investors have better portfolio diversification opportunities with the US markets, followed by the European markets. The BRIC markets offer no portfolio diversification benefits, which may be partly explained by the fact that the Asian markets cover some of the same countries as the BRIC markets, namely India and China. Considering the time-horizon dimension, the results narrow the portfolio diversification opportunities down to short-term investment horizons only: very short-run investors (up to eight days only) can benefit through portfolio diversification, especially in the US and European markets. These results have policy implications for Asian Islamic investors (e.g., portfolio management and strategic investment management).
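
For the wavelet side only, the following is a hedged sketch of scale-by-scale correlation between two return series. PyWavelets has no MODWT routine, so its SWT (also an undecimated transform) stands in, and the MGARCH-DCC estimation is not reproduced; function and scale names are illustrative.

```python
# Hedged sketch of the wavelet side only: scale-by-scale correlation of
# two daily return series. PyWavelets' SWT stands in for MODWT;
# MGARCH-DCC is not reproduced here.
import numpy as np
import pywt

def wavelet_scale_correlations(r1, r2, wavelet="db4", level=4):
    n = (min(len(r1), len(r2)) // 2**level) * 2**level  # swt length rule
    c1 = pywt.swt(np.asarray(r1[:n], dtype=float), wavelet, level=level)
    c2 = pywt.swt(np.asarray(r2[:n], dtype=float), wavelet, level=level)
    # pywt.swt lists the coarsest level first; reverse so that scale 1
    # is the finest (roughly a 2-4 day horizon for daily returns).
    return {f"scale_{j + 1}": np.corrcoef(d1, d2)[0, 1]
            for j, ((_, d1), (_, d2))
            in enumerate(zip(reversed(c1), reversed(c2)))}
```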

Article
Publication date: 26 November 2021

K. Upendra Raju and N. Amutha Prabha

Abstract

Purpose

Steganography is a data-hiding technique used in data security. When data are transmitted through a channel, there is no guarantee that they arrive safely. A variety of data-security techniques exist, such as patchwork, low-bit-rate data hiding and lossy compression. This paper aims to increase security and robustness.

Design/methodology/approach

This paper describes an approach to multiple-image steganography based on the combination of the lifting wavelet transform (LWT) and the discrete cosine transform (DCT). There is one cover image and two secret images. The cover image is corrupted with one of several noise types (Gaussian, salt-and-pepper, Poisson or speckle) and converted into the YCbCr, HSV and Lab color spaces.

Findings

With the rapid development of Internet access and multimedia technology, it has become very simple to hack and trace secret information. Using this steganography process in reversible data hiding (RDH) helps to protect secret information.

Originality/value

The color-space-converted image is divided into four sub-bands using the lifting wavelet transform. The discrete cosine transform is computed on the selected lower bands to hide the two secret images in the cover image, and one of the transformed secret images is additionally scrambled with the Arnold transform to obtain the encrypted/embedded image. The secret images are extracted from the stego image by applying the reverse operations. For comparison, PSNR, SSIM and MSE values are calculated by applying the same process in each of the YCbCr, HSV and Lab color spaces. The experimental results show better performance when compared across all of these color spaces.
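
A minimal sketch of transform-domain embedding in this spirit, under assumptions: PyWavelets exposes no lifting-scheme API, so `dwt2` stands in for the paper's LWT, and the blend weight `alpha` and the Arnold iteration count are illustrative.

```python
# Hedged sketch of transform-domain embedding; pywt's dwt2 stands in
# for the paper's LWT. alpha and the Arnold iterations are illustrative.
import numpy as np
import pywt
from scipy.fft import dctn, idctn

def arnold(img, iters=5):
    # Arnold cat-map scrambling of a square N x N image.
    N = img.shape[0]
    x, y = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    out = img
    for _ in range(iters):
        out = out[(x + y) % N, (x + 2 * y) % N]
    return out

def embed(cover, secret, alpha=0.05):
    # secret must be square and match the low band's shape
    # (half the cover's side for an even-sized cover).
    LL, highs = pywt.dwt2(np.asarray(cover, dtype=float), "haar")
    C = dctn(LL, norm="ortho")                    # DCT of the low band
    S = dctn(arnold(np.asarray(secret, dtype=float)), norm="ortho")
    return pywt.idwt2((idctn(C + alpha * S, norm="ortho"), highs), "haar")
```

Extraction would invert these steps in reverse order (wavelet decompose, DCT, subtract, inverse Arnold), which is what makes the scheme reversible.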

Details

International Journal of Intelligent Unmanned Systems, vol. 11 no. 1
Type: Research Article
ISSN: 2049-6427

Article
Publication date: 5 June 2020

Hiren Mewada, Amit V. Patel, Jitendra Chaudhari, Keyur Mahant and Alpesh Vala

Abstract

Purpose

In clinical analysis, medical image segmentation is an important step in studying the anatomical structure, helping to diagnose and classify abnormalities in the image. Wide variations in image modality and limitations in the instruments' acquisition processes make this segmentation challenging. This paper aims to propose a semi-automatic model to tackle these challenges and to segment medical images.

Design/methodology/approach

The authors propose a Legendre polynomial-based active contour to segment the region of interest (ROI) from noisy, low-resolution and inhomogeneous medical images using soft computing and a multi-resolution framework. In the first phase, an initial segmentation (i.e. prior clustering) is obtained from low-resolution medical images using fuzzy C-means (FCM) clustering, and noise is suppressed using a wavelet-energy-based multi-resolution approach. In the second phase, the final segmentation is obtained using the Legendre polynomial-based level set approach.
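
As a sketch of the first phase only, here is a bare-bones fuzzy C-means clustering of pixel intensities, the kind of prior segmentation that would seed the level-set stage; the Legendre polynomial level set itself is not reproduced. The fuzziness exponent `m = 2` and intensity-only features are standard but assumed choices.

```python
# Hedged sketch of phase one only: fuzzy C-means over pixel intensities
# as the prior clustering that seeds the level-set stage.
import numpy as np

def fcm_intensity(img, n_clusters=3, m=2.0, iters=50, seed=0):
    x = img.reshape(-1, 1).astype(float)
    rng = np.random.default_rng(seed)
    u = rng.random((x.shape[0], n_clusters))
    u /= u.sum(axis=1, keepdims=True)              # fuzzy memberships
    for _ in range(iters):
        um = u ** m
        centers = (um.T @ x) / um.sum(axis=0)[:, None]
        d = np.abs(x - centers.T) + 1e-12          # pixel-center distances
        u = d ** (-2.0 / (m - 1.0))                # FCM membership update
        u /= u.sum(axis=1, keepdims=True)
    return u.argmax(axis=1).reshape(img.shape), centers.ravel()
```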

Findings

The proposed model is tested on different medical images, such as X-ray images for brain tumor identification, magnetic resonance imaging (MRI), spine images, blood cells and blood vessels. A rigorous analysis of the model is carried out by calculating the improvement against noise, the required processing time and the segmentation accuracy. The comparative analysis concludes that the proposed model withstands noise and succeeds in segmenting any type of medical modality, achieving an average accuracy of 99.57%.

Originality/value

The proposed design is an improvement on the Legendre level set (L2S) model. The integration of FCM and the wavelet transform into L2S makes the model insensitive to noise and intensity inhomogeneity, and hence it succeeds in segmenting the ROI from a wide variety of medical images, even those where L2S fails.

Details

Engineering Computations, vol. 37 no. 9
Type: Research Article
ISSN: 0264-4401

Article
Publication date: 14 August 2017

Julius Owowo and S. Olutunde Oyadiji

Abstract

Purpose

The purpose of this paper is to employ the acoustic wave propagation method for leakage detection in pipes. The first objective is to use the acoustic finite element analysis (AFEA) method to simulate acoustic wave propagation and acoustic wave reflectometry in an intact pipe and in pipes with leaks of various sizes. The second objective is to validate the effectiveness and practicability of the acoustic wave method via experimental testing. The third objective involves the decomposition and de-noising of the measured acoustic waves using the stationary wavelet transform (SWT). It is shown that this approach, used for the first time for leakage detection in pipes, can identify, locate and estimate the size of a leakage defect in a pipe.

Design/methodology/approach

The research work was designed in line with best practices and accepted standards. The research methodology covers five basic areas: literature review, experimental measurements, simulations, data analysis and writing up the study with clear communication of the findings. The approach used was an acoustic wave propagation-based method in conjunction with the SWT for leakage detection in a fluid-filled pipe.

Findings

First, the simulation of acoustic wave propagation and acoustic wave reflectometry in fluid-filled pipes with and without leakage shows great potential for leakage detection in pipeline systems and can detect very small leaks of 1 mm diameter. Second, measured noise-contaminated acoustic waves propagating in a fluid-filled pipe can be successfully de-noised using the SWT method so as to clearly identify and locate leaks as small as 5 mm in diameter. Third, AFEA of a fluid-filled pipe can be achieved by simulating only the fluid content of the pipe, without including the pipe itself in the model; this eliminates the contact interaction between the solid pipe walls and the fluid and consequently reduces computational time and resources. Fourth, the relationship between the leakage diameter and the ratio of the first and second secondary wave amplitudes caused by the leakage can be represented by a second-order polynomial function. Fifth, the identification of leakage in a pipe is intuitive from a mere comparison of the acoustic waveforms of an intact pipe with those of a pipe with a leak.
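
To illustrate the de-noising and localization steps described above, here is a minimal sketch (not the authors' implementation): a 1-D SWT decompose-threshold-reconstruct pass using PyWavelets, followed by a bare-bones reflectometry conversion from reflection arrival time to leak distance. The universal-threshold rule, the `db4` wavelet, and the `locate_leak` peak-picking are all illustrative assumptions.

```python
# Hedged sketch, not the paper's code: SWT de-noising of a measured
# acoustic trace, then a crude reflectometry estimate of leak distance.
import numpy as np
import pywt

def swt_denoise(trace, wavelet="db4", level=4):
    n = (len(trace) // 2**level) * 2**level  # pywt.swt length rule
    coeffs = pywt.swt(np.asarray(trace[:n], dtype=float),
                      wavelet, level=level)
    # Noise std from the finest detail band (last pair); universal threshold.
    sigma = np.median(np.abs(coeffs[-1][1])) / 0.6745
    thr = sigma * np.sqrt(2 * np.log(n))
    den = [(cA, pywt.threshold(cD, thr, mode="soft")) for cA, cD in coeffs]
    return pywt.iswt(den, wavelet)

def locate_leak(trace, fs, c, t_min):
    # Pick the dominant reflection after t_min seconds (skip the
    # outgoing pulse) and convert its round-trip time to a distance,
    # assuming the sound speed c in the fluid is known.
    start = int(t_min * fs)
    k = start + np.argmax(np.abs(trace[start:]))
    return c * (k / fs) / 2.0
```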

Originality/value

The research work is novel and was developed from scratch. The AFEA of acoustic wave propagation and acoustic wave reflectometry in a static fluid-filled pipe, and the SWT method, have been used for the first time to detect, locate and estimate the size of a leakage in a fluid-filled pipe.

Details

International Journal of Structural Integrity, vol. 8 no. 4
Type: Research Article
ISSN: 1757-9864

Article
Publication date: 29 October 2021

Sai Bharadwaj B. and Sumanth Kumar Chennupati

Abstract

Purpose

The purpose of this manuscript is to detect heart faults using the electrocardiogram (ECG). Both low- and high-frequency noises, such as electromyography (EMG) noise and power line interference (PLI), degrade the quality of ECG signals.

Design/methodology/approach

The ECG record depicts the electrical activity of the heart, a non-invasive recording obtained by placing surface electrodes at designated locations on the patient's skin. The main idea of this manuscript is to present a novel filtering method to cancel unwanted noise in the ECG signal. Here, intrinsic time-scale decomposition (ITD) is introduced to suppress the effect of PLI on ECG signals.
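
For readers unfamiliar with ITD, the following is a hedged sketch of a single decomposition step with a fixed gain parameter `alpha = 0.5`; the paper's contribution, an adaptive `alpha`, is not reproduced, and the function name is illustrative.

```python
# Hedged sketch of one ITD step: a baseline is interpolated through the
# signal's extrema with gain alpha and subtracted, leaving the
# proper-rotation component. alpha is fixed here; the paper adapts it.
import numpy as np
from scipy.signal import argrelextrema

def itd_step(x, alpha=0.5):
    x = np.asarray(x, dtype=float)
    # Indices of local maxima and minima, merged and sorted.
    ext = np.sort(np.concatenate([argrelextrema(x, np.greater)[0],
                                  argrelextrema(x, np.less)[0]]))
    t, v = ext, x[ext]
    L = np.empty(len(t))
    L[0], L[-1] = v[0], v[-1]
    for k in range(1, len(t) - 1):
        # ITD baseline knot: convex combination (weight alpha) of the
        # line through neighbouring extrema and the extremum itself.
        interp = v[k - 1] + (t[k] - t[k - 1]) / (t[k + 1] - t[k - 1]) \
                 * (v[k + 1] - v[k - 1])
        L[k] = alpha * interp + (1 - alpha) * v[k]
    baseline = np.interp(np.arange(len(x)), t, L)
    return x - baseline, baseline  # (proper rotation, baseline)
```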

Findings

In the existing ITD, the gain control parameter is a constant value; in this paper, it is made adaptive, varying according to certain constraints. Simulation outcomes show that the proposed method effectively reduces the effect of PLI, and its effectiveness is expressed quantitatively with different evaluation metrics.

Originality/value

The results of the proposed method are compared with those of the Fourier decomposition method and eigenvalue decomposition methods (EDM) to validate its effectiveness.

Details

Journal of Engineering, Design and Technology, vol. 21 no. 6
Type: Research Article
ISSN: 1726-0531

Article
Publication date: 13 August 2018

Habiba Abdessalem and Saloua Benammou

Abstract

Purpose

The purpose of this paper is to apply the wavelet thresholding technique to analyze economic and socio-political situations in Tunisia using textual data sets. The technique is used to remove noise from the contingency table. A comparative study is done on correspondence analysis and classification results (using the k-means algorithm) before and after denoising.

Design/methodology/approach

The textual data set is collected from an electronic newspaper offering current economic news about Tunisia. Both hard- and soft-thresholding techniques are applied, based on various Daubechies wavelets with different numbers of vanishing moments.
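
A minimal sketch of the denoising step under assumptions: a 2-D Daubechies decomposition of the term-by-document contingency table with a universal threshold. The paper experiments with several wavelets and both hard and soft rules; the `db3` wavelet, the level and the threshold rule here are illustrative.

```python
# Hedged sketch of the denoising step: 2-D Daubechies thresholding of a
# term-by-document contingency table before correspondence analysis.
import numpy as np
import pywt

def denoise_contingency(table, wavelet="db3", level=2, mode="soft"):
    table = np.asarray(table, dtype=float)
    coeffs = pywt.wavedec2(table, wavelet, level=level)
    # Noise std from the finest diagonal band; universal threshold.
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    thr = sigma * np.sqrt(2 * np.log(table.size))
    new = [coeffs[0]] + [tuple(pywt.threshold(d, thr, mode=mode)
                               for d in det)
                         for det in coeffs[1:]]
    # Crop: waverec2 can come back one row/column larger for odd shapes.
    return pywt.waverec2(new, wavelet)[:table.shape[0], :table.shape[1]]
```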

Findings

The results obtained prove the effectiveness of the wavelet denoising method in textual data analysis. On one hand, the technique reduces the loss of information generated by correspondence analysis, ensures a better quality of representation of the factorial plane, renders lemmatization unnecessary in textual analysis and improves the results of classification by the k-means algorithm. On the other hand, the proximities shown in the factorial visualization reflect the economic situation of Tunisia during the period studied, showing a mainly stable situation before the revolution and a deteriorated one after it.

Originality/value

These are the first results to analyze economic and socio-political relations using textual data. The originality of this paper also comes from the joint use of correspondence analysis and wavelet thresholding in textual data analysis.

Details

Journal of Economic Studies, vol. 45 no. 3
Type: Research Article
ISSN: 0144-3585

Article
Publication date: 31 December 2021

Praveen Kumar Lendale and N.M. Nandhitha

Abstract

Purpose

Speckle noise removal in ultrasound images is one of the important tasks in biomedical imaging applications. Many filtering-based despeckling methods are discussed in existing works, and two-dimensional (2-D) transforms are also used extensively to reduce speckle noise in ultrasound medical images. In recent years, many soft computing-based intelligent techniques have been applied to noise removal and segmentation. However, despeckling accuracy still needs to be improved through hybrid approaches.

Design/methodology/approach

The work focuses on a double filter-bank structure combining the framelet transform with a Gaussian filter (GF), together with a fuzzy clustering approach, for despeckling ultrasound medical images. The presented transform efficiently rejects speckle noise through gray-scale relative thresholding, while the directional filter bank (DFB) preserves edge information.

Findings

The proposed approach is evaluated using performance indicators such as mean square error (MSE), peak signal-to-noise ratio (PSNR), speckle suppression index (SSI), mean structural similarity and edge preservation index (EPI). The proposed methodology is found to be superior in terms of all of these indicators.
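
For reference, hedged one-liners for three of the cited indicators; the SSI here follows the common ratio-of-coefficients-of-variation form, which is an assumption about the exact formula the paper uses.

```python
# Hedged reference implementations of MSE, PSNR and a simple SSI.
import numpy as np

def mse(ref, out):
    return np.mean((np.asarray(ref, float) - np.asarray(out, float)) ** 2)

def psnr(ref, out, peak=255.0):
    return 10.0 * np.log10(peak ** 2 / mse(ref, out))

def ssi(noisy, despeckled):
    cv = lambda im: np.std(im) / np.mean(im)  # coefficient of variation
    return cv(despeckled) / cv(noisy)         # < 1 means speckle reduced
```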

Originality/value

Fuzzy clustering methods have proved better than conventional threshold methods for noise removal. The algorithm gives an appreciable improvement over other modern speckle-reduction procedures, as it preserves geometric features even after noise removal.

Details

International Journal of Intelligent Unmanned Systems, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2049-6427

Article
Publication date: 3 October 2016

Hassan Samami and S. Olutunde Oyadiji

Abstract

Purpose

The purpose of this paper is to employ analytical and numerical techniques to generate modal displacement data for damaged beams containing very small crack-like surface flaws or slots, and to use the data in the development of a damage detection methodology. The detection method involves double differentiation of the modal data to identify the flaw location and magnitude.

Design/methodology/approach

The modal displacements of damaged beams are simulated analytically using the Bernoulli-Euler theory and numerically using the finite element method. The principle used in the analytical approach is based on changes in the transverse displacement due to the localized reduction of the flexural rigidity of the beam. Curvature analysis is employed to identify and locate the structural flaws from the modal data. The curvature mode shapes are calculated using a central difference approximation. The effects of random noise on the detectability of the structural flaws are also computed.
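
A minimal sketch of the curvature analysis described above: the curvature mode shape via a central-difference second derivative, whose localized spikes flag the flaw site. The end-point padding is an illustrative choice.

```python
# Hedged sketch: curvature mode shape by central differences; localized
# peaks in |kappa| indicate a flaw superposed on the smooth curvature
# of the intact beam.
import numpy as np

def curvature_mode_shape(mode_shape, h):
    # h is the spacing between measurement points along the beam.
    w = np.asarray(mode_shape, dtype=float)
    kappa = np.empty_like(w)
    kappa[1:-1] = (w[2:] - 2.0 * w[1:-1] + w[:-2]) / h ** 2
    kappa[0], kappa[-1] = kappa[1], kappa[-2]  # pad the ends
    return kappa
```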

Findings

The analytical approach is much more robust in simulating modal displacement data for beams with crack-like surface flaws or slots than the finite element analysis (FEA) approach, especially for flaws or slots of very small depths. The structural flaws are detectable in the presence of random noise of up to 5 per cent.

Originality/value

Simulating the effects of small crack-like surface flaws is important because it is essential to develop techniques that detect cracks at an early stage of their development. The FEA approach can only simulate the effects of crack-like surface flaws or slots with a depth ratio greater than 10 per cent. The analytical approach using the Bernoulli-Euler theory, on the other hand, can simulate the effects of crack-like surface flaws or slots with a depth ratio as small as 2 per cent.
