Search results

1 – 10 of 89
Article
Publication date: 3 May 2013

Valeri Kontorovich and Zinaida Lovtchikova

Abstract

Purpose

The purpose of this paper is to provide the results of investigation of multi‐moment statistical characteristics of chaos and apply them to improve the accuracy of nonlinear algorithms for chaos filtering for real‐time applications.

Design/methodology/approach

The approach used to find the multi-moment statistical properties of chaos, namely multi-moment cumulant (covariance) functions of higher order, is a generalization of the authors' previously proposed "degenerated cumulant equations" method. These multi-moment cumulant functions are applied in a generalization of the Stratonovich-Kushner equations (SKE) for the optimum algorithm of nonlinear filtering of chaos, as well as for the synthesis of quasi-optimum algorithms.

Findings

Results are presented that characterize the multi-moment statistical properties of chaos and formulate the theoretical background for the synthesis of multi-moment optimum and quasi-optimum algorithms for the nonlinear filtering of chaos with improved accuracy in the presence of additive white noise.

Originality/value

The paper presents new theoretical results on the statistical description of chaos, previously reported only partially and from experimental studies. A novel approach to chaos filtering is also presented. The proposed approach is dedicated to further improving the filtering accuracy for low-SNR (less than one) scenarios and is important for implementation in real-time processing. As an important practical example, a new modified EKF algorithm is proposed with favourable filtering-fidelity characteristics and a complexity practically the same as that of the "classic" one-moment EKF algorithm.
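The multi-moment algorithms are not reproduced here, but the "classic" one-moment EKF baseline the paper compares against can be sketched for a scalar chaotic map. A minimal sketch, assuming a noisy logistic map as the chaos source; the map, noise levels and tuning constants are illustrative and not from the paper:

```python
import numpy as np

def logistic(x, r=3.9):
    # chaotic logistic map, used here as an illustrative chaos generator
    return r * x * (1.0 - x)

def ekf_logistic(ys, r=3.9, q=1e-5, rv=0.01, x0=0.5, p0=1.0):
    """Classic one-moment EKF for x[k+1] = r*x*(1-x), y[k] = x[k] + noise."""
    xh, p, est = x0, p0, []
    for y in ys:
        F = r * (1.0 - 2.0 * xh)          # Jacobian of the map at the estimate
        xh = logistic(xh, r)              # state prediction
        p = F * p * F + q                 # covariance prediction
        k = p / (p + rv)                  # Kalman gain
        xh = xh + k * (y - xh)            # measurement update
        p = (1.0 - k) * p
        xh = float(np.clip(xh, 1e-3, 1.0 - 1e-3))  # keep the map in its basin
        est.append(xh)
    return np.array(est)

# simulate the true chaotic state and noisy observations
rng = np.random.default_rng(0)
x, xs = 0.3, []
for _ in range(500):
    x = logistic(x)
    xs.append(x)
xs = np.array(xs)
ys = xs + 0.1 * rng.standard_normal(500)
est = ekf_logistic(ys)
rmse = float(np.sqrt(np.mean((est - xs) ** 2)))
```

The clip keeps the predicted state inside the map's invariant interval, without which a single bad update would make the logistic iteration diverge.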

Details

COMPEL - The international journal for computation and mathematics in electrical and electronic engineering, vol. 32 no. 3
Type: Research Article
ISSN: 0332-1649

Article
Publication date: 1 February 2001

J.V. ANDERSEN and D. SORNETTE

Abstract

In the real world, the variance of portfolio returns provides only a limited quantification of incurred risks, as the distributions of returns have "fat tails" and the dependence between assets is only imperfectly accounted for by the correlation matrix. Value-at-risk and other measures of risk have been developed to account for the larger moves allowed by non-Gaussian distributions. In this article, the authors distinguish "small" risks from "large" risks in order to suggest an alternative approach to portfolio optimization that simultaneously increases portfolio returns while minimizing the risk of low-frequency, high-severity events. This approach treats the variance, or second-order cumulant, as a measure of "small" risks. In contrast, higher even-order cumulants, starting with the fourth-order cumulant, quantify the "large" risks. The authors employ estimates of portfolio cumulants based on fat-tailed distributions to rebalance portfolio exposures and thereby mitigate large risks.
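The "small"/"large" split can be made concrete: the second cumulant is the variance, while the fourth cumulant vanishes for a Gaussian and grows with tail weight. A minimal numpy sketch using sample estimators (the distributions and sample sizes are illustrative):

```python
import numpy as np

def small_large_risk(returns):
    """kappa_2 (variance) measures 'small' risks; the fourth cumulant
    kappa_4 = m4 - 3*m2**2 is zero for a Gaussian and quantifies 'large',
    fat-tailed risks."""
    xc = returns - returns.mean()
    m2 = np.mean(xc ** 2)
    m4 = np.mean(xc ** 4)
    return m2, m4 - 3.0 * m2 ** 2

rng = np.random.default_rng(0)
k2_g, k4_g = small_large_risk(rng.standard_normal(100_000))  # thin-tailed
k2_l, k4_l = small_large_risk(rng.laplace(size=100_000))     # fat-tailed
```

For the Laplace sample the fourth cumulant is large and positive (theoretical value 12) even though its variance is only twice that of the Gaussian, which is exactly the distinction the article exploits.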

Details

The Journal of Risk Finance, vol. 2 no. 3
Type: Research Article
ISSN: 1526-5943

Article
Publication date: 6 March 2017

Zhiwei Kang, Xin He, Jin Liu and Tianyuan Tao

Abstract

Purpose

The authors propose a new method of fast time delay measurement for integrated pulsar pulse profiles in X-ray pulsar-based navigation (XNAV). As the basic observable for exact orientation in XNAV, the time of arrival (TOA) can be obtained by time delay measurement of integrated pulsar pulse profiles. Therefore, the main purpose of the paper is to establish a fast time delay measurement method that respects the spacecraft's limited computing resources.

Design/methodology/approach

Given that the third-order cumulants can suppress the Gaussian noise and reduce calculation to achieve precise and fast positioning in XNAV, the proposed method sets the third-order auto-cumulants of standard pulse profile, the third-order cross-cumulants of the standard and the observed pulse profile as basic variables and uses the cross-correlation function of these two variables to estimate the time delay of integrated pulsar pulse profiles.
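The construction can be sketched with a one-lag (diagonal) slice of the third-order cumulant; the paper works with the full two-lag cumulant functions, so the slice, pulse shape and noise level below are simplifications for illustration:

```python
import numpy as np

def third_order_slice(u, s):
    # diagonal slice C[tau] = mean_t u[t]^2 * s[t + tau], circular lags
    return np.array([np.mean(u * u * np.roll(s, -tau)) for tau in range(len(s))])

def estimate_delay(x, s):
    """Delay of observation x relative to template s via circular
    cross-correlation of third-order cumulant slices; third-order
    cumulants of Gaussian noise vanish, which suppresses the noise."""
    A = third_order_slice(s, s)   # auto-cumulant slice of the template
    C = third_order_slice(x, s)   # cross-cumulant slice with the observation
    R = np.fft.ifft(np.fft.fft(A) * np.conj(np.fft.fft(C))).real
    return int(np.argmax(R))

N = 256
t = np.arange(N)
s = np.exp(-0.5 * ((t - 60.0) / 5.0) ** 2)
s -= s.mean()                     # zero-mean pulse profile
x_clean = np.roll(s, 37)          # profile delayed by 37 bins
rng = np.random.default_rng(0)
x_noisy = x_clean + 0.02 * rng.standard_normal(N)
```

For a circularly shifted profile, the cross slice is the auto slice shifted by the delay, so the FFT-based correlation peaks exactly at the delay bin.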

Findings

The proposed method is simple and fast and has high accuracy in time delay measurement for integrated pulsar pulse profiles. The results show that, compared to the bispectrum algorithm, the method improves the precision of the time delay measurement and significantly reduces the computation time.

Practical implications

To improve the performance of time delay estimation in XNAV systems, the authors proposed a novel method for XNAV to achieve precise and fast positioning.

Originality/value

Compared to the bispectrum algorithm, the proposed method can improve the speed and precision of the TOA’s calculation effectively by using the cross-correlation function of integrated pulsar pulse profile’s third-order cumulants instead of Fourier transform in bispectrum algorithm.

Details

Aircraft Engineering and Aerospace Technology, vol. 89 no. 2
Type: Research Article
ISSN: 1748-8842

Article
Publication date: 3 January 2017

Meghdad Tourandaz Kenari, Mohammad Sadegh Sepasian and Mehrdad Setayesh Nazar

Abstract

Purpose

The purpose of this paper is to present a new cumulant-based method, based on the properties of saddle-point approximation (SPA), to solve the probabilistic load flow (PLF) problem for distribution networks with wind generation.

Design/methodology/approach

This technique combines cumulant properties with the SPA to improve the analytical approach of PLF calculation. The proposed approach takes into account the load demand and wind generation uncertainties in distribution networks, where a suitable probabilistic model of wind turbine (WT) is used.

Findings

The proposed procedure is applied to the IEEE 33-bus distribution test system, and the results are discussed. The output variables, with and without WT connection, are presented for normal and gamma random variables (RVs). The case studies demonstrate that the proposed method gives accurate results with a relatively low computational burden, even for non-Gaussian probability density functions.

Originality/value

The main contribution of this paper is the use of the SPA for the reconstruction of the probability density function or cumulative distribution function in the PLF problem. To confirm the validity of the method, results are compared with Monte Carlo simulation and Gram–Charlier expansion results. From the viewpoint of accuracy and computational cost, the SPA almost always surpasses the other approximations for obtaining the cumulative distribution function of the output RVs.
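The reconstruction step can be sketched: truncate the cumulant generating function after the first few cumulants, solve the saddle-point equation K'(s) = x by Newton's method, and evaluate the SPA density. The truncation order and solver below are illustrative; the power-flow model itself is omitted:

```python
import numpy as np

def spa_pdf(x, kappa, n_newton=50):
    """Saddle-point density from cumulants kappa = (k1, k2, k3, k4):
    K(s) = k1 s + k2 s^2/2 + k3 s^3/6 + k4 s^4/24;
    solve K'(s*) = x, then f(x) ~ exp(K(s*) - s* x) / sqrt(2 pi K''(s*))."""
    k1, k2, k3, k4 = kappa
    s = 0.0
    for _ in range(n_newton):                   # Newton on K'(s) - x = 0
        g = k1 + k2 * s + k3 * s**2 / 2 + k4 * s**3 / 6 - x
        h = k2 + k3 * s + k4 * s**2 / 2         # K''(s)
        s -= g / h
    K = k1 * s + k2 * s**2 / 2 + k3 * s**3 / 6 + k4 * s**4 / 24
    h = k2 + k3 * s + k4 * s**2 / 2
    return float(np.exp(K - s * x) / np.sqrt(2 * np.pi * h))
```

A useful sanity check: with Gaussian cumulants (0, 1, 0, 0) the saddle point is s* = x and the SPA reproduces the standard normal density exactly.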

Details

COMPEL - The international journal for computation and mathematics in electrical and electronic engineering, vol. 36 no. 1
Type: Research Article
ISSN: 0332-1649

Article
Publication date: 1 October 1998

T. Lobos, Z. Leonowicz, J. Szymanda and P. Ruczewski

Abstract

During recent years, higher order statistics (HOS) have found wide applicability in many diverse fields, e.g. biomedicine, harmonic retrieval and adaptive filtering. In power spectrum estimation, the signal under consideration is processed in such a way that the distribution of power among its frequency components is estimated, while phase relations between the frequency components are suppressed. Higher order statistics and their associated Fourier transforms reveal not only amplitude information about a signal, but also phase information. If a non-Gaussian signal is received along with additive Gaussian noise, a transformation to the higher order cumulant domain eliminates the noise. Several methods for the estimation of signal components are based on HOS. In the paper we apply the MUSIC method, both to the correlation matrix and to the fourth order cumulant, to investigate the state of asynchronous running of synchronous machines and the fault operation of inverter-fed induction motors. When the investigated signal is distorted by coloured noise, more exact results can be achieved by applying cumulants.
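The subspace step can be sketched with the correlation-matrix variant of MUSIC; the fourth-order-cumulant variant swaps the covariance matrix for a cumulant matrix but keeps the same eigendecomposition machinery. A minimal numpy sketch on a synthetic two-tone signal (frequencies, window length and noise level are illustrative):

```python
import numpy as np

def music_spectrum(x, m, n_src, freqs):
    """MUSIC pseudospectrum from the sample covariance of sliding windows."""
    X = np.array([x[i:i + m] for i in range(len(x) - m + 1)])
    R = X.T @ X.conj() / X.shape[0]           # sample covariance R[j,k]=E[x_j x_k*]
    _, V = np.linalg.eigh(R)                  # eigenvalues in ascending order
    En = V[:, :m - n_src]                     # noise subspace (smallest eigenvalues)
    k = np.arange(m)
    P = np.empty(len(freqs))
    for i, f in enumerate(freqs):
        a = np.exp(2j * np.pi * f * k)        # steering vector
        proj = En.conj().T @ a
        P[i] = 1.0 / np.real(proj.conj() @ proj)
    return P

rng = np.random.default_rng(0)
n = np.arange(400)
x = (np.exp(2j * np.pi * 0.10 * n) + np.exp(2j * np.pi * 0.20 * n)
     + 0.1 * (rng.standard_normal(400) + 1j * rng.standard_normal(400)))
freqs = np.linspace(0.0, 0.5, 2001)
P = music_spectrum(x, 20, 2, freqs)
peaks = [i for i in range(1, 2000) if P[i] > P[i - 1] and P[i] > P[i + 1]]
top2 = sorted(freqs[sorted(peaks, key=lambda i: P[i])[-2:]])
```

The two largest local maxima of the pseudospectrum sit at the two tone frequencies; replacing `R` with a fourth-order cumulant matrix gives the coloured-noise-robust variant the abstract describes.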

Details

COMPEL - The international journal for computation and mathematics in electrical and electronic engineering, vol. 17 no. 5
Type: Research Article
ISSN: 0332-1649

Open Access
Article
Publication date: 21 February 2022

Kwangil Bae

Abstract

The author investigates realized comoments that overcome the drawbacks of conventional ones and derives the following findings. First, the author proves that (even generalized) geometric implied lower-order comoments yield neither the geometric realized third comoment nor the fourth moment. This is in contrast to previous studies that produce the geometric realized third moment and arithmetic realized higher-order moments through lower-order implied moments. Second, arithmetic realized joint cumulants are obtained through complete Bell polynomials of lower-order joint cumulants. This study's realized measures are unbiased estimators and can, therefore, overcome the drawbacks of conventional realized measures.
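The moment-cumulant direction of that relation can be illustrated in the univariate case (the paper's setting is the more general joint, realized-measure one). A minimal sketch of the complete Bell polynomial recursion, with Gaussian and Poisson cumulant inputs as checks:

```python
from math import comb

def moments_from_cumulants(kappa):
    """Raw moments m_1..m_n from cumulants k_1..k_n via the complete Bell
    polynomial recursion  m_j = sum_i C(j-1, i) * k_{i+1} * m_{j-1-i}."""
    m = [1.0]                                   # m_0 = 1
    for j in range(1, len(kappa) + 1):
        m.append(sum(comb(j - 1, i) * kappa[i] * m[j - 1 - i]
                     for i in range(j)))
    return m[1:]
```

Standard normal cumulants (0, 1, 0, 0) give raw moments (0, 1, 0, 3); Poisson(1) cumulants (1, 1, 1, 1) give the Bell numbers (1, 2, 5, 15), both consistent with the recursion.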

Details

Journal of Derivatives and Quantitative Studies: 선물연구, vol. 30 no. 2
Type: Research Article
ISSN: 1229-988X

Article
Publication date: 1 October 2018

Vinod Nistane and Suraj Harsha

Abstract

Purpose

In rotary machines, bearing failure is one of the major causes of machinery breakdown, and bearing degradation monitoring is therefore a major concern for the prevention of bearing failures. This paper aims to present a combination of stationary wavelet decomposition and extra-trees regression (ETR) for the evaluation of bearing degradation.

Design/methodology/approach

The higher order cumulant features are extracted from the bearing vibration signals using stationary wavelet decomposition (the stationary wavelet transform [SWT]). The extracted features are then fed to the ETR to obtain the normal and failure states. A dominance-level curve is built using the dissimilarity data of the test object and retained as a health-degradation indicator for the evaluation of bearing health.
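The feature-extraction stage can be sketched with a single undecimated Haar stage and three cumulants per sub-band; in practice `pywt.swt` and scikit-learn's `ExtraTreesRegressor` would supply the transform and the regressor. The wavelet, decomposition depth and feature set below are illustrative, not the paper's exact pipeline:

```python
import numpy as np

def haar_swt_level(x):
    # one undecimated (stationary) Haar stage: approximation and detail
    # sub-bands, each the same length as the input
    xs = np.roll(x, 1)
    return (x + xs) / np.sqrt(2.0), (x - xs) / np.sqrt(2.0)

def cumulant_features(band):
    # second-, third- and fourth-order cumulants of one sub-band
    c = band - band.mean()
    m2, m3, m4 = (np.mean(c ** k) for k in (2, 3, 4))
    return [m2, m3, m4 - 3.0 * m2 ** 2]

def hos_features(signal):
    """Higher-order-cumulant feature vector (6 values) from one SWT stage."""
    a, d = haar_swt_level(signal)
    return np.array(cumulant_features(a) + cumulant_features(d))

rng = np.random.default_rng(0)
vib = np.sin(2 * np.pi * 0.05 * np.arange(2048)) + 0.3 * rng.standard_normal(2048)
feats = hos_features(vib)
```

Vectors like `feats`, computed per vibration segment, would then be the inputs to the ETR model.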

Findings

Experiments are conducted to verify and assess the effectiveness of ETR for evaluating bearing degradation. To justify the preeminence of the recommended approach, its performance is compared with that of random forest regression and multi-layer perceptron regression.

Originality/value

The experimental results indicated that the adopted method detects degradation more accurately at an early stage. Furthermore, diagnostics and prognostics have been receiving much attention in the field of vibration analysis, where they play a significant role in avoiding accidents.

Article
Publication date: 17 March 2014

Vassilis Polimenis and Ioannis Papantonis

Abstract

Purpose

This paper aims to enhance a co-skew-based risk measurement methodology initially introduced in Polimenis, by extending it for the joint estimation of the jump betas for two stocks.

Design/methodology/approach

The authors introduce the possibility of idiosyncratic jumps and analyze the robustness of the estimated sensitivities when two stocks are jointly fit to the same set of latent jump factors. When individual stock skews substantially differ from those of the market, the requirement that the individual skew be exactly matched places a strain on the single-stock estimation system.

Findings

The authors argue that, once this restrictive requirement is relaxed in an enhanced joint framework, the system calibrates to a more robust solution in terms of uncovering the true magnitude of the latent parameters of the model, at the same time revealing information about the level of idiosyncratic skews in individual stock return distributions.

Research limitations/implications

Allowing for idiosyncratic skews relaxes the demands placed on the estimation system and hence improves its explanatory power by focusing on matching systematic skew that is more informational. Furthermore, allowing for stock-specific jumps that are not related to the market is a realistic assumption. There is now evidence that idiosyncratic risks are priced as well, and this has been a major drawback and criticism in using CAPM to assess risk premia.

Practical implications

Since jumps in stock prices incorporate the most valuable information, then quantifying a stock's exposure to jump events can have important practical implications for financial risk management, portfolio construction and option pricing.

Originality/value

This approach boosts the “signal-to-noise” ratio by utilizing co-skew moments, so that the diffusive component is filtered out through higher-order cumulants. Without making any distributional assumptions, the authors are able not only to capture the asymmetric sensitivity of a stock to latent upward and downward systematic jump risks, but also to uncover the magnitude of idiosyncratic stock skewness. Since cumulants in a Levy process evolve linearly in time, this approach is horizon independent and hence can be deployed at all frequencies.
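The horizon-independence claim rests on a standard property of Lévy processes worth stating explicitly: with i.i.d. increments, the cumulant generating function scales linearly in time, and therefore so does every cumulant,

\[ \kappa_n(X_t) = t\,\kappa_n(X_1), \qquad n = 1, 2, \dots \]

so cumulant-based (e.g. co-skew) measures estimated at one sampling frequency can be rescaled exactly to any other.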

Details

The Journal of Risk Finance, vol. 15 no. 2
Type: Research Article
ISSN: 1526-5943

Article
Publication date: 1 June 2005

Linh Tran Hoai and Stanislaw Osowski

Abstract

Purpose

This paper presents a new approach to the integration of neural classifiers. Typically, only the best trained network is chosen, while the rest are discarded. However, combining the trained networks helps to integrate the knowledge acquired by the component classifiers and in this way improves the accuracy of the final classification. The aim of the research is to develop and compare methods of combining neural classifiers for heart beat recognition.

Design/methodology/approach

Two methods of integrating the results of the individual classifiers are proposed. One is based on the statistical reliability of post-processing performance on the training data; the second uses the least-mean-square method to adjust the weights of the weighted-voting integrating network.
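The second integration method admits a compact sketch: stack the member outputs and solve for the combination weights in the least-squares sense. The shapes and synthetic data below are illustrative, not the paper's ECG setup:

```python
import numpy as np

def lms_ensemble_weights(probs, onehot):
    """probs: (n_members, n_samples, n_classes) member outputs;
    onehot: (n_samples, n_classes) targets.
    Least-squares weights for the weighted-voting combiner."""
    K = probs.shape[0]
    A = probs.reshape(K, -1).T                 # one column per member
    w, *_ = np.linalg.lstsq(A, onehot.ravel(), rcond=None)
    return w

def ensemble_predict(probs, w):
    return np.tensordot(w, probs, axes=1).argmax(axis=1)

rng = np.random.default_rng(0)
labels = rng.integers(0, 7, size=300)          # e.g. six arrhythmias + normal
Y = np.eye(7)[labels]
good = Y                                       # a member that is always right
bad = rng.random((300, 7))
bad /= bad.sum(axis=1, keepdims=True)          # a member that guesses
probs = np.stack([good, bad])
w = lms_ensemble_weights(probs, Y)
acc = float((ensemble_predict(probs, w) == labels).mean())
```

The least-squares fit drives the weight of the guessing member to zero, so the combiner recovers the accurate member's predictions.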

Findings

The experimental results of the recognition of six types of arrhythmias and normal sinus rhythm have shown that the performance of individual classifiers could be improved significantly by the integration proposed in this paper.

Practical implications

The presented application should be regarded as the first step in the direction of automatic recognition of the heart rhythms on the basis of the registered ECG waveforms.

Originality/value

The results mean that, instead of designing one high-performance classifier, one can build a number of classifiers, each of more modest performance. An appropriate combination of them may produce a performance of much higher quality.

Details

COMPEL - The international journal for computation and mathematics in electrical and electronic engineering, vol. 24 no. 2
Type: Research Article
ISSN: 0332-1649

Article
Publication date: 4 January 2016

Nianyun Liu, Jingsong Li, Quan Liu, Hang Su and Wei Wu

Abstract

Purpose

Higher order statistics (HOS)-based blind source separation (BSS) techniques have been applied to separate data, obtaining better performance than second-order-statistics-based methods. The cost function constructed from the HOS-based separation criterion is a complicated nonlinear function that is difficult to optimize. The purpose of this paper is to solve this nonlinear optimization problem effectively and obtain an estimate of the source signals with higher accuracy than classic BSS methods.

Design/methodology/approach

In this paper, a new technique based on HOS in kernel space is proposed. The proposed approach first maps the mixture data into a high-dimensional kernel space through a nonlinear mapping and then constructs a cost function based on a higher-order separation criterion in the kernel space. The cost function is constructed using the kernel function, defined as the inner products between the images of all pairs of data points in the kernel space. The estimate of the source signals is obtained by minimizing the cost function.

Findings

The results of a number of experiments on generic synthetic and real data show that the HOS separation criterion in kernel space exhibits good performance for different kinds of distributions. The proposed method provides a higher signal-to-interference ratio and is less sensitive to the source distribution than the FastICA and JADE algorithms.

Originality/value

The proposed method combines the advantages of the kernel method and HOS properties to achieve better performance than either alone. It does not require computing the coordinates of the data in the kernel space explicitly; instead, it computes the kernel function, which is simple to optimize. The use of a nonlinear function space makes the algorithm more accurate and more robust to different kinds of distributions.
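For contrast, the classical input-space HOS baseline that the paper benchmarks against can be sketched as a deflationary fixed-point search maximizing the magnitude of the fourth-order cumulant (a FastICA-style kurtosis contrast); this is not the authors' kernel-space method:

```python
import numpy as np

def whiten(X):
    # zero-mean, identity-covariance transform of the mixtures
    Xc = X - X.mean(axis=1, keepdims=True)
    d, E = np.linalg.eigh(Xc @ Xc.T / Xc.shape[1])
    return E @ np.diag(d ** -0.5) @ E.T @ Xc

def kurtosis_ica(X, n_iter=200, seed=1):
    """Deflationary fixed-point BSS with a fourth-order-cumulant contrast."""
    Z = whiten(X)
    n = Z.shape[0]
    rng = np.random.default_rng(seed)
    B = np.zeros((n, n))
    for p in range(n):
        w = rng.standard_normal(n)
        w /= np.linalg.norm(w)
        for _ in range(n_iter):
            w = (Z * (w @ Z) ** 3).mean(axis=1) - 3.0 * w  # kurtosis fixed point
            w -= B[:p].T @ (B[:p] @ w)                     # deflate found rows
            w /= np.linalg.norm(w)
        B[p] = w
    return B @ Z

rng = np.random.default_rng(0)
n = 20_000
s1 = rng.uniform(-np.sqrt(3), np.sqrt(3), n)   # sub-Gaussian source
s2 = rng.laplace(size=n)                        # super-Gaussian source
S = np.vstack([s1, s2])
X = np.array([[1.0, 0.5], [0.5, 1.0]]) @ S      # observed mixtures
Y = kurtosis_ica(X)
```

Up to sign and permutation, each recovered row of `Y` should be strongly correlated with one of the true sources.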

Details

COMPEL: The International Journal for Computation and Mathematics in Electrical and Electronic Engineering, vol. 35 no. 1
Type: Research Article
ISSN: 0332-1649
