Search results
1 – 10 of 108
Abstract
Purpose
The Hurst exponent plays a central role in distinguishing fractal signals and interpreting their significance. For estimators of the Hurst exponent, accuracy and efficiency are two unavoidable considerations. The main purpose of this study is to improve the execution efficiency of existing estimators, especially the fast maximum likelihood estimator (MLE), which has optimal accuracy.
Design/methodology/approach
A two-stage procedure is developed that combines a quicker method with a more accurate one, narrowing the estimate of the Hurst exponent from a large range down to a small one. For the best possible accuracy, the data-induction method is currently the ideal first-stage estimator and the fast MLE is the best candidate for the second stage.
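The two-stage idea can be illustrated with a minimal Python sketch (an illustrative stand-in only: the aggregated-variance first stage and the plain exact Gaussian likelihood below are assumptions, not the paper's data-induction method or its fast MLE, and the block sizes and the ±0.1 bracket are arbitrary choices). A cheap estimate first brackets the Hurst exponent, and the costly likelihood is then maximised only inside that bracket.

import numpy as np
from scipy.linalg import toeplitz
from scipy.optimize import minimize_scalar

def fgn_autocovariance(h, n):
    # Autocovariance of unit-variance fractional Gaussian noise at lags 0..n-1.
    k = np.arange(n)
    return 0.5 * (np.abs(k + 1) ** (2 * h) - 2 * np.abs(k) ** (2 * h)
                  + np.abs(k - 1) ** (2 * h))

def stage1_aggregated_variance(x, block_sizes=(4, 8, 16, 32, 64)):
    # Coarse H from the slope of log Var(block means) versus log block size.
    sizes, variances = [], []
    for m in block_sizes:
        n_blocks = len(x) // m
        if n_blocks < 4:
            continue
        means = x[:n_blocks * m].reshape(n_blocks, m).mean(axis=1)
        sizes.append(m)
        variances.append(means.var())
    slope = np.polyfit(np.log(sizes), np.log(variances), 1)[0]
    return 1.0 + slope / 2.0                     # Var(block mean) ~ m**(2H - 2)

def stage2_mle(x, h_lo, h_hi):
    # Exact Gaussian MLE for H restricted to [h_lo, h_hi]; the noise variance is
    # profiled out. Each evaluation is O(n^3), hence the value of a narrow bracket.
    n = len(x)
    def neg_profile_loglik(h):
        sigma = toeplitz(fgn_autocovariance(h, n))
        _, logdet = np.linalg.slogdet(sigma)
        quad = x @ np.linalg.solve(sigma, x)
        return 0.5 * (n * np.log(quad / n) + logdet)
    return minimize_scalar(neg_profile_loglik, bounds=(h_lo, h_hi), method="bounded").x

def two_stage_hurst(x):
    x = np.asarray(x, dtype=float)
    h1 = stage1_aggregated_variance(x)
    return stage2_mle(x, max(0.01, h1 - 0.1), min(0.99, h1 + 0.1))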
Findings
For signals modeled as discrete-time fractional Gaussian noise, the proposed two-stage estimator saves up to 41.18 per cent of the computational time of the fast MLE while remaining almost as accurate; for signals modeled as discrete-time fractional Brownian motion, it still saves about 35.29 per cent, except for smaller data sizes.
Originality/value
The proposed two-stage estimation procedure is a novel idea. Other fields of parameter estimation can be expected to apply the same concept to improve computational performance while remaining almost as accurate as the more accurate of the two estimators.
Abstract
Purpose
The purpose of this paper is to test the efficient market hypothesis for major Indian sectoral indices by means of a long-memory approach in both the time domain and the frequency domain. This paper also tests the accuracy of the detrended fluctuation analysis (DFA) approach and the local Whittle (LW) approach by means of Monte Carlo simulation experiments.
Design/methodology/approach
The author applies the DFA approach to compute the scaling exponent in the time domain. The robustness of the results is tested by computing the scaling exponent in the frequency domain by means of the LW estimator. The author applies a moving sub-sample approach to DFA to study the evolution of market efficiency in Indian sectoral indices.
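As a point of reference, first-order DFA can be sketched in a few lines of Python (illustrative only, not the author's implementation; the window sizes are assumptions): the series is integrated, a linear trend is removed within each window, and the scaling exponent is the slope of log F(s) against log s.

import numpy as np

def dfa_exponent(x, window_sizes=(8, 16, 32, 64, 128)):
    x = np.asarray(x, dtype=float)
    profile = np.cumsum(x - x.mean())            # integrated, mean-removed series
    fluctuations = []
    for s in window_sizes:
        n_windows = len(profile) // s
        segments = profile[:n_windows * s].reshape(n_windows, s)
        t = np.arange(s)
        rms = []
        for seg in segments:
            trend = np.polyval(np.polyfit(t, seg, 1), t)   # local linear trend
            rms.append(np.sqrt(np.mean((seg - trend) ** 2)))
        fluctuations.append(np.mean(rms))
    # Scaling exponent = slope of log F(s) versus log s.
    return np.polyfit(np.log(window_sizes), np.log(fluctuations), 1)[0]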
Findings
The Monte Carlo simulation experiments indicate that the DFA approach and the LW approach provide good estimates of the scaling exponent as the sample size increases. The author also finds that the efficiency characteristics of Indian sectoral indices and their stages of development are dynamic in nature.
Originality/value
This paper has both methodological and empirical originality. On the methodological side, the author tests the small-sample properties of the DFA and LW approaches using simulated series of fractional Gaussian noise and finds that both approaches capture the scaling behavior of asset prices well. On the empirical side, the author studies the evolution of long-range dependence characteristics in Indian sectoral indices.
Abstract
Purpose
The purpose of this paper is to introduce the performance of the traditional collision resolution algorithm (CRA) under self-similar traffic and to present a prediction-based CRA for wireless media access. The method is then evaluated through experiments.
Design/methodology/approach
The traditional traffic models are mostly based on the "Poisson" model or the "Bernoulli" process, but traffic measurements over the past decade have found the coexistence of both long- and short-range dependence in network traffic. On the other hand, CRA is an effective strategy for improving the performance of multiple-access protocols, and it achieves the highest capacity among all known multiple-access protocols under the Poisson traffic model. In this paper, a CRA model is built in OPNET to study the effects of different traffic traces, such as the fractional autoregressive integrated moving average process with a non-Gaussian white driving sequence and real traffic data captured at a well-attended ACM conference. The performance of the traditional CRA is compared under the self-similar traffic model and the Poisson model, and a novel CRA based on time-series prediction theory is designed for self-similar traffic models.
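For readers who want to reproduce the flavour of such experiments, an approximately self-similar arrival trace can be generated by superposing ON/OFF sources with heavy-tailed period lengths (a classical construction; this small Python sketch is an assumption-laden stand-in for the FARIMA and measured conference traces used in the paper, and every parameter value is illustrative).

import numpy as np

def pareto_periods(rng, alpha, xm, size):
    # Pareto(alpha, xm) draws; 1 < alpha < 2 gives infinite variance and hence
    # long-range dependence in the aggregated trace.
    return xm * (1.0 - rng.random(size)) ** (-1.0 / alpha)

def on_off_trace(n_slots=10000, n_sources=50, alpha=1.4, seed=0):
    rng = np.random.default_rng(seed)
    arrivals = np.zeros(n_slots)
    for _ in range(n_sources):
        t = 0
        on = rng.random() < 0.5
        while t < n_slots:
            period = int(np.ceil(pareto_periods(rng, alpha, 1.0, 1)[0]))
            if on:                               # ON source offers one packet per slot
                arrivals[t:t + period] += 1
            t += period
            on = not on
    return arrivals                              # packets offered in each slot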
Findings
The traditional Poisson traffic model achieves the best performance under the traditional CRA, while the performance of self-similar traffic under the traditional CRA is too poor for an actual network environment. For example, the Poisson traffic model obtains the highest throughput, the smallest delay and the smallest number of collision resolutions under the traditional CRA. This paper demonstrates that the first-come first-serve (FCFS) algorithm must be improved under self-similar traffic models. The novel prediction-based collision resolution strategy provides better performance under self-similar traffic: its throughput, delay and number of collision resolutions are all better than those of the traditional CRA.
Originality/value
This paper presents a prediction-based CRA for self-similar traffic, combining FCFS with prediction theory, and thereby provides a new method for resolving packet collisions.
David G. McMillan and Pako Thupayagale
Abstract
Purpose
In order to assess the informational efficiency of African equity markets (AEMs), the purpose of this paper is to examine long memory in both equity returns and volatility using auto‐regressive fractionally integrated moving average (ARFIMA)‐FIGARCH/hyperbolic GARCH (HYGARCH) models.
Design/methodology/approach
In order to test for long memory, the behaviour of the auto‐correlation function for 11 AEMs is examined. Following the graphical analysis, the authors proceed to estimate ARFIMA‐FIGARCH and ARFIMA‐HYGARCH models, specifically designed to capture long‐memory dynamics.
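The graphical step can be reproduced with a short sample-autocorrelation routine (a generic sketch, not the authors' code): a slow, roughly hyperbolic decay of the autocorrelations of squared or absolute returns is the usual visual signature of long memory in volatility, whereas returns themselves should decay quickly under short memory.

import numpy as np

def sample_acf(x, max_lag=100):
    # Sample autocorrelations at lags 1..max_lag (max_lag must be < len(x)).
    x = np.asarray(x, dtype=float) - np.mean(x)
    denom = np.dot(x, x)
    return np.array([np.dot(x[:-k], x[k:]) / denom for k in range(1, max_lag + 1)])

# usage, with `returns` a series of daily log returns for one index:
# acf_returns = sample_acf(returns)
# acf_squared = sample_acf(returns ** 2)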
Findings
The results show that these markets (largely) display a predictable component in returns, while evidence of long memory in volatility is very mixed. In comparison, results for the UK and US control markets show short memory in returns, while evidence of long memory in volatility is mixed. These results show that the behaviour of equity market returns and risks is dissimilar across markets, and this may have implications for portfolio diversification and risk management strategies.
Practical implications
The results of the analysis may have important implications for portfolio diversification and risk management strategies.
Originality/value
The importance of this paper lies in its being the first to systematically analyse long-memory dynamics for a range of AEMs. African markets are becoming increasingly important as a source of international portfolio diversification and risk management. Hence, the results here have implications for the conduct of international portfolio building, asset pricing and hedging.
Abstract
Purpose
To define the main elements of a formal calculus which deals with fractional Brownian motion (fBm), and to examine its prospects for application in systems science.
Design/methodology/approach
The approach is based on a generalization of Maruyama's notation. The key is the new Taylor series of fractional order $f(x+h) = E_\alpha(h^\alpha D^\alpha)\, f(x)$, where $E_\alpha(\cdot)$ is the Mittag-Leffler function.
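Written out term by term (a standard expansion of the Mittag-Leffler function, added here for the reader's convenience rather than quoted from the abstract), the fractional Taylor series reads

\[
f(x+h) = E_\alpha\!\left(h^{\alpha} D^{\alpha}\right) f(x)
       = \sum_{k=0}^{\infty} \frac{h^{\alpha k}}{\Gamma(1+\alpha k)}\, D^{\alpha k} f(x),
\qquad
E_\alpha(z) = \sum_{k=0}^{\infty} \frac{z^{k}}{\Gamma(1+\alpha k)},
\]

which reduces to the classical Taylor series when $\alpha = 1$.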
Findings
As illustrative applications of this formal calculus in systems science, one considers the linear quadratic Gaussian problem with fractal noises, the analysis of the equilibrium position of a system disturbed by a local fractal time, and a growth model which involves fractal noises. One then examines what happens when the maximum entropy principle is applied to systems involving fBms (fractals, for short).
Research limitations/implications
The framework of this paper is applied mathematics and engineering mathematics, and the results so obtained allow the practical analysis of stochastic dynamics subject to fractional noises.
Practical implications
The direct prospect of application of this approach is the analysis of some stock market dynamics and some biological systems.
Originality/value
The fractional Taylor series is new, and thus so are all its implications.
Harald Kinateder and Niklas Wagner
Abstract
Purpose
The paper aims to model multiple-period market risk forecasts under long memory persistence in market volatility.
Design/methodology/approach
The paper proposes volatility forecasts based on a combination of the GARCH(1,1) model with potentially fat-tailed and skewed innovations and a long memory specification of the slowly declining influence of past volatility shocks. As the square-root-of-time rule is known to be mis-specified, the GARCH setting of Drost and Nijman is used as the benchmark model. The empirical study of equity market risk is based on daily returns during the period January 1975 to December 2010. The out-of-sample accuracy of VaR predictions is studied for 5, 10, 20 and 60 trading days.
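The contrast between square-root-of-time and long-memory scaling can be made concrete with a stylized Python sketch (an assumption for illustration only: a constant power-law exponent H is used, which is neither the paper's Drost and Nijman benchmark nor its long-memory specification, and a normal quantile stands in for the fitted fat-tailed innovation distribution).

from scipy.stats import norm

def scaled_var(sigma_1day, horizon, hurst=0.5, alpha=0.99):
    # h-day value-at-risk (positive loss fraction) for a zero-mean return:
    # hurst = 0.5 recovers the square-root-of-time rule, hurst > 0.5 mimics
    # persistent (long-memory) volatility and inflates longer-horizon VaR.
    sigma_h = sigma_1day * horizon ** hurst
    return norm.ppf(alpha) * sigma_h

# usage: scaled_var(0.01, 10)             # sqrt-of-time 10-day VaR
#        scaled_var(0.01, 10, hurst=0.6)  # long-memory-style scaling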
Findings
The long memory scaling approach remarkably improves VaR forecasts for the longer horizons. This result is only in part due to higher predicted risk levels. Ex post calibration to equal unconditional VaR levels illustrates that the approach also enhances efficiency in allocating VaR capital through time.
Practical implications
The improved VaR forecasts show that one should account for long memory when calibrating risk models.
Originality/value
The paper models single-period returns rather than choosing the simpler approach of modeling lower-frequency multiple-period returns for long-run volatility forecasting. The approach considers long memory in volatility and has two main advantages: it yields a consistent set of volatility predictions for various horizons and VaR forecasting accuracy is improved.
Abstract
The thesis addressed in this paper is that we should not be surprised to come across fractals in the analysis of some dynamic systems involving human factors; in substance, fractals in human behaviour are acceptable. For the convenience of the reader, a preliminary background to some models of fractional Brownian motion found in the literature is given, and then the main features of the complex-valued model, via a random walk in the complex plane, recently introduced by the author are recalled. The practical meaning of the model is exhibited. The parallel of the central limit theorem here is Lévy's stability. If it is supposed that human decision-makers work via an observation process which combines the Heisenberg principle with a quantization principle in the measurement, then fractal dynamics appears to be quite in order. The relation with the theory of relative information is exhibited. The conjecture is then the following: could this model explain why fractals appear in finance, for instance?
Calum G. Turvey and Paitoon Wongsasutthikul
Abstract
Purpose
The purpose of this paper is to argue that a stationary-differenced autoregressive (AR) process with lag greater than 1, AR(q > 1), has certain properties that are consistent with a fractional Brownian motion (fBm). The authors are interested in investigating approaches to identifying the existence of persistent memory of one form or another for the purpose of simulating commodity (and other asset) prices. The authors show, in theory and with application to agricultural commodity prices, the relationship between AR(q) and quasi-fBm.
Design/methodology/approach
In this paper the authors develop mathematical relationships in support of using AR(q > 1) processes for simulating quasi-fBm.
Findings
From theory, the authors show that any AR(q) process is a stationary, self-similar process with a lag structure that captures the essential elements of scaling and a fractional power law. The authors illustrate the approach through various means and apply the quasi-fractional AR(q) process to agricultural commodity prices.
Research limitations/implications
While the results can be applied to most time series of commodity prices, the authors limit the evaluation to the Gaussian case. Thus the approach does not apply to infinite-variance models.
Practical implications
Using the structure of an AR(q > 1) model to simulate quasi-fBm is a simple approach that can be applied with ease using conventional Monte Carlo methods.
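A minimal Monte Carlo sketch of this idea in Python (an illustrative stand-in, not the authors' estimation procedure; the lag order, the Gaussian innovations and the log-price framing are assumptions) fits an AR(q) to the differenced log prices by least squares and then simulates new differenced paths that are cumulated back into prices.

import numpy as np

def fit_ar(x, q):
    # Least-squares AR(q) fit: returns (intercept, lag coefficients, residual std).
    y = x[q:]
    X = np.column_stack([np.ones(len(y))] + [x[q - k:-k] for k in range(1, q + 1)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return beta[0], beta[1:], resid.std(ddof=q + 1)

def simulate_ar_path(intercept, coeffs, sigma, n, seed=None):
    rng = np.random.default_rng(seed)
    q = len(coeffs)
    path = np.zeros(n + q)                       # first q entries are zero start-up values
    for t in range(q, n + q):
        lags = path[t - q:t][::-1]               # [x_{t-1}, ..., x_{t-q}]
        path[t] = intercept + coeffs @ lags + rng.normal(0.0, sigma)
    return path[q:]

# usage, with `prices` a numpy array of observed prices:
# diffs = np.diff(np.log(prices))
# c, phi, s = fit_ar(diffs, q=5)
# simulated = prices[-1] * np.exp(np.cumsum(simulate_ar_path(c, phi, s, 250)))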
Originality/value
The authors believe that the approach to simulating quasi-fBm using standard AR(q > 1) models is original. The approach is intuitive and can be applied easily.
Yonghong Zhang, Shuhua Mao and Yuxiao Kang
Abstract
Purpose
With the massive use of fossil energy polluting the natural environment, clean energy has gradually become the focus of future energy development. The purpose of this article is to propose a new hybrid forecasting model to forecast the production and consumption of clean energy.
Design/methodology/approach
Firstly, the memory characteristics of the production and consumption of clean energy were analyzed by the rescaled range (R/S) analysis method. Secondly, the original series was decomposed into several components and residuals with different characteristics by the ensemble empirical mode decomposition (EEMD) algorithm, and the residuals were predicted by the fractional derivative grey Bernoulli model [FDGBM (p, 1)]. The other components were predicted using artificial intelligence (AI) models (least square support vector regression [LSSVR] and artificial neural network [ANN]). Finally, the fitted values of each part were added to obtain the predicted value of the original series.
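The R/S step can be sketched in a few lines of Python (a textbook rescaled-range routine, not the authors' implementation; the window sizes are illustrative): the Hurst exponent is the slope of log(R/S) against log(window size).

import numpy as np

def rescaled_range(segment):
    deviations = np.cumsum(segment - segment.mean())
    r = deviations.max() - deviations.min()      # range of cumulative deviations
    s = segment.std(ddof=0)
    return r / s if s > 0 else np.nan

def hurst_rs(x, window_sizes=(8, 16, 32, 64, 128)):
    x = np.asarray(x, dtype=float)
    avg_rs = []
    for w in window_sizes:
        blocks = [x[i:i + w] for i in range(0, len(x) - w + 1, w)]
        avg_rs.append(np.nanmean([rescaled_range(b) for b in blocks]))
    return np.polyfit(np.log(window_sizes), np.log(avg_rs), 1)[0]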
Findings
This study found that clean energy production and consumption have memory characteristics. The hybrid models EEMD–FDGBM (p, 1)–LSSVR and EEMD–FDGBM (p, 1)–ANN significantly outperformed other models in predicting clean energy production and consumption.
Originality/value
Considering that clean energy series have complex nonlinear and memory characteristics, this paper combines the EEMD method with the FDGBM (p, 1) and AI models to establish hybrid models for predicting the consumption and output of clean energy.
Ngai Hang Chan and Wilfredo Palma
Abstract
Since the seminal works by Granger and Joyeux (1980) and Hosking (1981), estimation of long-memory time series models has been receiving considerable attention and a number of parameter estimation procedures have been proposed. This paper gives an overview of this plethora of methodologies with special focus on likelihood-based techniques. Broadly speaking, likelihood-based techniques can be classified into the following categories: the exact maximum likelihood (ML) estimation (Sowell, 1992; Dahlhaus, 1989), ML estimates based on autoregressive approximations (Granger & Joyeux, 1980; Li & McLeod, 1986), Whittle estimates (Fox & Taqqu, 1986; Giraitis & Surgailis, 1990), Whittle estimates with autoregressive truncation (Beran, 1994a), approximate estimates based on the Durbin–Levinson algorithm (Haslett & Raftery, 1989), state-space-based maximum likelihood estimates for ARFIMA models (Chan & Palma, 1998), and estimation of stochastic volatility models (Ghysels, Harvey, & Renault, 1996; Breidt, Crato, & de Lima, 1998; Chan & Petris, 2000), among others. Given the diversified applications of these techniques in different areas, this review aims at providing a succinct survey of these methodologies as well as an overview of important related problems such as ML estimation with missing data (Palma & Chan, 1997), the influence of subsets of observations on estimates, and the estimation of seasonal long-memory models (Palma & Chan, 2005). Performances and asymptotic properties of these techniques are compared and examined. Interconnections and finite-sample performances of these procedures are studied. Finally, applications of these methodologies to financial time series are discussed.
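As a concrete example of the frequency-domain branch of this family, the Whittle estimate of the fractional-differencing parameter d for an ARFIMA(0, d, 0) model can be sketched as follows in Python (a generic illustration with the innovation variance concentrated out, not the implementation of any one of the papers surveyed).

import numpy as np
from scipy.optimize import minimize_scalar

def whittle_d(x):
    # Whittle estimate of d for ARFIMA(0, d, 0): minimise the discretised Whittle
    # objective over the Fourier frequencies, with the scale parameter profiled out.
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    j = np.arange(1, (n - 1) // 2 + 1)
    lam = 2.0 * np.pi * j / n
    periodogram = np.abs(np.fft.fft(x)[j]) ** 2 / (2.0 * np.pi * n)

    def objective(d):
        g = np.abs(2.0 * np.sin(lam / 2.0)) ** (-2.0 * d)   # spectral shape of ARFIMA(0, d, 0)
        return np.log(np.mean(periodogram / g)) + np.mean(np.log(g))

    return minimize_scalar(objective, bounds=(-0.49, 0.49), method="bounded").x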