Search results
1 – 10 of 714
Abstract
Purpose
The Hurst exponent has long been important for distinguishing between fractal signals and explaining their significance. For estimators of the Hurst exponent, accuracy and efficiency are two unavoidable considerations. The main purpose of this study is to improve the execution efficiency of existing estimators, especially the fast maximum likelihood estimator (MLE), which has optimal accuracy.
Design/methodology/approach
A two-stage procedure is developed that combines a quicker method with a more accurate one, narrowing the estimate of the Hurst exponent from a large range to a small one. For the best possible accuracy, the data-induction method is currently the ideal first-stage estimator, and the fast MLE is the best candidate for the second stage.
Findings
For signals modeled as discrete-time fractional Gaussian noise, the proposed two-stage estimator can save up to 41.18 per cent of the computational time of the fast MLE while remaining almost as accurate. Even for signals modeled as discrete-time fractional Brownian motion, it saves about 35.29 per cent, except at smaller data sizes.
Originality/value
The proposed two-stage estimation procedure is a novel idea. Other fields of parameter estimation can be expected to apply the two-stage concept to improve computational performance while remaining almost as accurate as the more accurate of the two estimators.
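To make the two-stage idea concrete, the minimal Python sketch below pairs a cheap first-stage estimate with an exact-likelihood second stage restricted to a narrow interval around it. The aggregated-variance first stage, the dense-matrix Gaussian likelihood and the half_width tuning value are illustrative assumptions only; the paper's data-induction method and fast MLE are not reproduced here.

import numpy as np
from scipy.linalg import toeplitz
from scipy.optimize import minimize_scalar

def fgn_autocov(h, n):
    # autocovariance of unit-variance fractional Gaussian noise at lags 0..n-1
    k = np.arange(n)
    return 0.5 * ((k + 1.0) ** (2 * h) - 2.0 * k ** (2 * h) + np.abs(k - 1.0) ** (2 * h))

def neg_profile_loglik(h, x):
    # exact Gaussian likelihood for fGn with the innovation scale profiled out
    # (an O(n^3) stand-in for the paper's fast MLE)
    n = len(x)
    cov = toeplitz(fgn_autocov(h, n))
    _, logdet = np.linalg.slogdet(cov)
    quad = x @ np.linalg.solve(cov, x)
    return 0.5 * (logdet + n * np.log(quad / n))

def aggregated_variance_h(x, block_sizes=(4, 8, 16, 32, 64)):
    # quick first-stage estimate: Var(block mean of size m) ~ m^(2H - 2)
    logs_m, logs_v = [], []
    for m in block_sizes:
        nb = len(x) // m
        if nb < 2:
            continue
        means = x[: nb * m].reshape(nb, m).mean(axis=1)
        logs_m.append(np.log(m))
        logs_v.append(np.log(means.var()))
    slope = np.polyfit(logs_m, logs_v, 1)[0]
    return float(np.clip(0.5 * slope + 1.0, 0.05, 0.95))

def two_stage_hurst(x, half_width=0.1):
    h0 = aggregated_variance_h(x)                     # stage 1: fast, rough
    lo, hi = max(0.01, h0 - half_width), min(0.99, h0 + half_width)
    res = minimize_scalar(neg_profile_loglik, bounds=(lo, hi), args=(x,), method="bounded")
    return res.x                                      # stage 2: MLE on the narrowed range

x = np.random.default_rng(0).standard_normal(512)     # white noise, true H = 0.5
print(two_stage_hurst(x))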
Ahmed Hurairah, Noor Akma Ibrahim, Isa Bin Daud and Kassim Haron
Abstract
Purpose
Exact confidence interval estimation for the new extreme value model is often impractical. This paper seeks to evaluate the accuracy of approximate confidence intervals for the two‐parameter new extreme value model.
Design/methodology/approach
Confidence intervals for the parameters of the new model based on the likelihood ratio, Wald and Rao statistics are evaluated and compared through a simulation study. The criteria used in evaluating the confidence intervals are the attainment of the nominal error probability and the symmetry of the lower and upper error probabilities.
Findings
This study substantiates the merits of the likelihood ratio, the Wald and the Rao statistics. The results indicate that the likelihood ratio‐based intervals perform much better than the Wald and Rao intervals.
Originality/value
Exact interval estimates for the new model are difficult to obtain. Consequently, large sample intervals based on the asymptotic maximum likelihood estimators have gained widespread use. Intervals based on inverting likelihood ratio, Rao and Wald statistics are rarely used in commercial packages. This paper shows that the likelihood ratio intervals are superior to intervals based on the Wald and the Rao statistics.
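As an illustration of how a likelihood-ratio interval is obtained by inverting the LR statistic, the sketch below profiles out a nuisance parameter and keeps every parameter value whose profile LR statistic stays below the chi-square(1) cutoff. A two-parameter Weibull model stands in for the new extreme value model, whose density is not reproduced here; the grid and bounds are assumed values.

import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import weibull_min, chi2

def loglik(shape, scale, x):
    return weibull_min.logpdf(x, shape, scale=scale).sum()

def profile_loglik(shape, x):
    # maximise over the nuisance (scale) parameter for a fixed shape
    res = minimize_scalar(lambda s: -loglik(shape, s, x),
                          bounds=(1e-6, 10 * x.mean()), method="bounded")
    return -res.fun

def lr_interval(x, level=0.95, grid=np.linspace(0.2, 5.0, 200)):
    shape_hat, _, scale_hat = weibull_min.fit(x, floc=0)
    l_hat = loglik(shape_hat, scale_hat, x)
    cutoff = chi2.ppf(level, df=1)
    # keep every shape value not rejected by the profile LR test
    keep = [s for s in grid if 2 * (l_hat - profile_loglik(s, x)) <= cutoff]
    return min(keep), max(keep)

x = weibull_min.rvs(c=1.5, scale=2.0, size=200, random_state=0)
print(lr_interval(x))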
Md. Nazmul Ahsan and Jean-Marie Dufour
Abstract
Statistical inference (estimation and testing) for the stochastic volatility (SV) model of Taylor (1982, 1986) is challenging; likelihood-based methods in particular are difficult to apply because of the presence of latent variables. Existing methods are either computationally costly or inefficient, or both. In this paper, we propose computationally simple estimators for the SV model which are at the same time highly efficient. The proposed class of estimators uses a small number of moment equations derived from an ARMA representation associated with the SV model, along with the possibility of using “winsorization” to improve stability and efficiency. We call these ARMA-SV estimators. Closed-form expressions for the ARMA-SV estimators are obtained, and no numerical optimization procedure or choice of initial parameter values is required. The asymptotic distributional theory of the proposed estimators is studied. Due to their computational simplicity, the ARMA-SV estimators allow one to make reliable – even exact – simulation-based inference through the application of Monte Carlo (MC) tests or bootstrap methods. We compare them in a simulation experiment with a wide array of alternative estimation methods in terms of bias, root mean square error and computation time. In addition to confirming the enormous computational advantage of the proposed estimators, the results show that the ARMA-SV estimators match (or exceed) alternative estimators in terms of precision, including the widely used Bayesian estimator. The proposed methods are applied to daily return observations on three major stocks (Coca-Cola, Walmart, Ford) and the S&P Composite Price Index (2000–2017). The results confirm the presence of stochastic volatility with strong persistence.
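The following sketch conveys the flavour of a closed-form, moment-based SV estimator using the basic log-normal SV model: the autocovariances of log-squared returns identify the persistence and variance of the latent volatility without any numerical optimisation. It is not the paper's ARMA-SV estimator, and the winsorisation step is omitted.

import numpy as np

def sv_moment_estimates(y):
    x = np.log(y ** 2 + 1e-12)           # log-squared returns
    xc = x - x.mean()
    n = len(x)
    def acov(k):                          # sample autocovariance at lag k
        return (xc[:n - k] * xc[k:]).mean()
    phi = acov(2) / acov(1)               # gamma_x(k) = Var(h) * phi^k for k >= 1
    var_h = acov(1) / phi
    sigma_eta2 = var_h * (1.0 - phi ** 2)  # innovation variance of the AR(1) log-volatility
    mu_h = x.mean() + 1.2704               # E[log eps_t^2] = -1.2704 for Gaussian eps
    return phi, sigma_eta2, mu_h

# quick check on simulated data from the log-normal SV model
rng = np.random.default_rng(0)
n, phi0, sig0, mu0 = 20000, 0.95, 0.2, -1.0
h = np.empty(n); h[0] = mu0
for t in range(1, n):
    h[t] = mu0 + phi0 * (h[t - 1] - mu0) + sig0 * rng.standard_normal()
y = np.exp(h / 2) * rng.standard_normal(n)
print(sv_moment_estimates(y))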
Joshua C. C. Chan, Chenghan Hou and Thomas Tao Yang
Abstract
Importance sampling is a popular Monte Carlo method used in a variety of areas in econometrics. When the variance of the importance sampling estimator is infinite, the central limit theorem does not apply and estimates tend to be erratic even when the simulation size is large. The authors consider asymptotic trimming in such a setting. Specifically, the authors propose a bias-corrected tail-trimmed estimator such that it is consistent and has finite variance. The authors show that the proposed estimator is asymptotically normal, and has good finite-sample properties in a Monte Carlo study.
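A minimal sketch of tail trimming in importance sampling, under assumptions that differ from the paper: a Student-t(3) target, a normal proposal (so the untrimmed estimator has infinite variance), winsorisation at an order statistic whose rank grows like n^(1/4), and no bias correction, which the paper's estimator adds.

import numpy as np
from scipy.stats import t as student_t, norm

def trimmed_is_mean(h, n=100_000, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.normal(scale=3.0, size=n)                      # draws from the proposal
    w = student_t.pdf(x, df=3) / norm.pdf(x, scale=3.0)    # importance weights
    z = h(x) * w
    k = max(1, int(n ** 0.25))                             # number of extreme terms to cap
    cap = np.sort(np.abs(z))[-k]                           # trimming threshold (k-th largest)
    z_trim = np.clip(z, -cap, cap)                         # winsorise the extreme terms
    return z_trim.mean()                                   # paper adds a bias correction here

print(trimmed_is_mean(lambda x: x ** 2))                   # E[X^2] = 3 under the t(3) target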
Lukas Koelbl, Alexander Braumann, Elisabeth Felsenstein and Manfred Deistler
Abstract
This paper is concerned with estimation of the parameters of a high-frequency VAR model using mixed-frequency data, both for the stock and for the flow case. Extended Yule–Walker estimators and (Gaussian) maximum likelihood type estimators based on the EM algorithm are considered. Properties of these estimators are derived, partly analytically and partly by simulation. Finally, the loss of information from mixed-frequency data relative to the high-frequency situation, as well as the gain of information from using mixed-frequency rather than low-frequency data, is discussed.
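For reference, the sketch below shows the plain Yule–Walker estimator for a fully observed VAR(1). The extended Yule–Walker estimators studied in the paper replace moments that are unobservable under mixed-frequency sampling with additional lagged covariance equations, which is not reproduced here.

import numpy as np

def sample_autocov(x, lag):
    # x: (T, k) observations; returns the sample version of E[x_t x_{t-lag}'] (mean removed)
    xc = x - x.mean(axis=0)
    return xc[lag:].T @ xc[:len(x) - lag] / (len(x) - lag)

def yule_walker_var1(x):
    g0, g1 = sample_autocov(x, 0), sample_autocov(x, 1)
    a_hat = g1 @ np.linalg.inv(g0)           # A = Gamma(1) Gamma(0)^{-1}
    sigma_hat = g0 - a_hat @ g1.T            # innovation covariance
    return a_hat, sigma_hat

# simulate a bivariate VAR(1) and recover its coefficient matrix
rng = np.random.default_rng(0)
a_true = np.array([[0.5, 0.1], [0.0, 0.3]])
x = np.zeros((5000, 2))
for t in range(1, 5000):
    x[t] = a_true @ x[t - 1] + rng.standard_normal(2)
print(yule_walker_var1(x)[0])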
R. Kelley Pace and James P. LeSage
Abstract
We show how to quickly estimate spatial probit models for large data sets using maximum likelihood. Like Beron and Vijverberg (2004), we use the GHK (Geweke-Hajivassiliou-Keane) algorithm to perform maximum simulated likelihood estimation. However, using the GHK for large sample sizes has been viewed as extremely difficult (Wang, Iglesias, & Wooldridge, 2013). Nonetheless, for sparse covariance and precision matrices often encountered in spatial settings, the GHK can be applied to very large sample sizes as its operation counts and memory requirements increase almost linearly with n when using sparse matrix techniques.
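The sketch below shows the core of the GHK simulator for a Gaussian rectangle probability: truncated standard normals are drawn sequentially along the Cholesky factor and the one-dimensional truncation probabilities are multiplied together. It uses a small dense covariance matrix; the sparse-matrix techniques that make the approach scale to large spatial samples are not shown.

import numpy as np
from scipy.stats import norm

def ghk_upper_orthant(sigma, b, n_draws=10_000, seed=0):
    # estimate P(X <= b) elementwise for X ~ N(0, sigma) by GHK simulation
    rng = np.random.default_rng(seed)
    l = np.linalg.cholesky(sigma)                 # lower-triangular factor
    m = len(b)
    eta = np.zeros((n_draws, m))
    weight = np.ones(n_draws)
    for i in range(m):
        # conditional upper truncation point given the earlier draws
        upper = (b[i] - eta[:, :i] @ l[i, :i]) / l[i, i]
        p = norm.cdf(upper)
        weight *= p                               # P(eta_i <= upper | past)
        u = rng.uniform(size=n_draws)
        eta[:, i] = norm.ppf(u * p)               # draw from the truncated standard normal
    return weight.mean()

sigma = np.array([[1.0, 0.5], [0.5, 1.0]])
# analytic value: 1/4 + arcsin(0.5)/(2*pi) = 1/3
print(ghk_upper_orthant(sigma, b=np.array([0.0, 0.0])))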
Roman Liesenfeld, Jean-François Richard and Jan Vogler
Abstract
We propose a generic algorithm for numerically accurate likelihood evaluation of a broad class of spatial models characterized by a high-dimensional latent Gaussian process and non-Gaussian response variables. The class of models under consideration includes specifications for discrete choices, event counts and limited-dependent variables (truncation, censoring, and sample selection) among others. Our algorithm relies upon a novel implementation of efficient importance sampling (EIS) specifically designed to exploit typical sparsity of high-dimensional spatial precision (or covariance) matrices. It is numerically very accurate and computationally feasible even for very high-dimensional latent processes. Thus, maximum likelihood (ML) estimation of high-dimensional non-Gaussian spatial models, hitherto considered to be computationally prohibitive, becomes feasible. We illustrate our approach with ML estimation of a spatial probit for US presidential voting decisions and spatial count data models (Poisson and Negbin) for firm location choices.
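For orientation, the deliberately naive sketch below evaluates a latent-Gaussian count likelihood by Monte Carlo integration using the prior of the latent field as the importance density. EIS replaces this prior with an optimised Gaussian importance density fitted by auxiliary least-squares steps, which is not reproduced here; the covariance kernel and the data are invented for illustration.

import numpy as np
from scipy.stats import poisson

def naive_simulated_loglik(y, mu, cov, n_draws=2_000, seed=0):
    # y: counts; latent log-intensity eta ~ N(mu, cov); y_i | eta ~ Poisson(exp(eta_i))
    rng = np.random.default_rng(seed)
    eta = rng.multivariate_normal(mu, cov, size=n_draws)       # draws from the prior
    logp = poisson.logpmf(y, np.exp(eta)).sum(axis=1)           # log p(y | eta) per draw
    m = logp.max()
    return m + np.log(np.exp(logp - m).mean())                  # log-mean-exp for stability

n = 10
cov = 0.3 * np.exp(-np.abs(np.subtract.outer(np.arange(n), np.arange(n))) / 2.0)
mu = np.ones(n)
y = np.array([2, 3, 1, 4, 2, 3, 5, 2, 1, 3])
print(naive_simulated_loglik(y, mu, cov))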
Gabriele Fiorentini, Alessandro Galesi and Enrique Sentana
Abstract
We generalise the spectral EM algorithm for dynamic factor models in Fiorentini, Galesi, and Sentana (2014) to bifactor models with pervasive global factors complemented by regional ones. We exploit the sparsity of the loading matrices so that researchers can estimate those models by maximum likelihood with many series from multiple regions. We also derive convenient expressions for the spectral scores and information matrix, which allow us to switch to the scoring algorithm near the optimum. We explore the ability of a model with a global factor and three regional ones to capture inflation dynamics across 25 European countries over 1999–2014.
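The zero pattern exploited by the spectral EM algorithm can be visualised with the small sketch below, which builds a bifactor loading matrix whose global column is dense while each regional column is nonzero only within its own block; the region sizes and loading values are illustrative only.

import numpy as np

def bifactor_loadings(region_sizes, rng):
    n = sum(region_sizes)
    r = len(region_sizes)
    lam = np.zeros((n, 1 + r))
    lam[:, 0] = rng.uniform(0.5, 1.5, size=n)        # global factor: every series loads
    row = 0
    for j, size in enumerate(region_sizes):
        lam[row:row + size, 1 + j] = rng.uniform(0.5, 1.5, size=size)  # regional block
        row += size
    return lam

rng = np.random.default_rng(0)
lam = bifactor_loadings([4, 3, 5], rng)
print((lam != 0).astype(int))    # global column dense, regional columns block-diagonal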
Ngai Hang Chan and Wilfredo Palma
Abstract
Since the seminal works by Granger and Joyeux (1980) and Hosking (1981), estimation of long-memory time series models has received considerable attention and a number of parameter estimation procedures have been proposed. This paper gives an overview of this plethora of methodologies with special focus on likelihood-based techniques. Broadly speaking, likelihood-based techniques can be classified into the following categories: exact maximum likelihood (ML) estimation (Sowell, 1992; Dahlhaus, 1989), ML estimates based on autoregressive approximations (Granger & Joyeux, 1980; Li & McLeod, 1986), Whittle estimates (Fox & Taqqu, 1986; Giraitis & Surgailis, 1990), Whittle estimates with autoregressive truncation (Beran, 1994a), approximate estimates based on the Durbin–Levinson algorithm (Haslett & Raftery, 1989), state-space-based maximum likelihood estimates for ARFIMA models (Chan & Palma, 1998), and estimation of stochastic volatility models (Ghysels, Harvey, & Renault, 1996; Breidt, Crato, & de Lima, 1998; Chan & Petris, 2000), among others. Given the diversified applications of these techniques in different areas, this review aims at providing a succinct survey of these methodologies as well as an overview of important related problems such as ML estimation with missing data (Palma & Chan, 1997), the influence of subsets of observations on estimates, and the estimation of seasonal long-memory models (Palma & Chan, 2005). Performances and asymptotic properties of these techniques are compared and examined. Inter-connections among these procedures and their finite-sample performance are studied. Finally, applications of these methodologies to financial time series are discussed.
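As one concrete example of the likelihood-based techniques surveyed, the sketch below implements the concentrated Whittle objective for the long-memory parameter d of an ARFIMA(0, d, 0) process, whose spectral density is proportional to |2 sin(lambda/2)|^(-2d), and minimises it over a bounded interval. Short-memory dynamics and the other estimators mentioned above are not covered.

import numpy as np
from scipy.optimize import minimize_scalar

def whittle_d(x):
    n = len(x)
    j = np.arange(1, (n - 1) // 2 + 1)
    lam = 2.0 * np.pi * j / n                                     # Fourier frequencies
    periodogram = np.abs(np.fft.fft(x - x.mean())[j]) ** 2 / (2.0 * np.pi * n)
    g = lambda d: np.abs(2.0 * np.sin(lam / 2.0)) ** (-2.0 * d)   # spectral shape
    def objective(d):
        gd = g(d)
        # concentrated (scale-free) Whittle objective
        return np.log(np.mean(periodogram / gd)) + np.mean(np.log(gd))
    return minimize_scalar(objective, bounds=(-0.49, 0.49), method="bounded").x

# white noise has d = 0, so the estimate should be close to zero
rng = np.random.default_rng(0)
print(whittle_d(rng.standard_normal(4096)))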
Jörg Breitung and Sandra Eickmeier
Abstract
This paper compares alternative estimation procedures for multi-level factor models which imply blocks of zero restrictions on the associated matrix of factor loadings. We suggest a sequential least squares algorithm for minimizing the total sum of squared residuals and a two-step approach based on canonical correlations; both are much simpler and faster than the Bayesian approaches previously employed in the literature. An additional advantage is that our approaches can be used to estimate more complex multi-level factor structures where the number of levels is greater than two. Monte Carlo simulations suggest that the estimators perform well in typical sample sizes encountered in the factor analysis of macroeconomic data sets. We apply the methodologies to study international comovements of business and financial cycles.
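A hedged sketch of an alternating (sequential) least-squares scheme for a two-level factor model appears below: the global factor is updated from the data net of the regional components, and each regional factor from its own block net of the global component. The authors' exact updating order and the canonical-correlations two-step estimator are not reproduced, and the block sizes in the example are invented.

import numpy as np

def first_pc(z):
    # leading principal component of a (T, N) block and its loadings
    u, s, _ = np.linalg.svd(z, full_matrices=False)
    f = u[:, 0] * s[0] / np.sqrt(len(z))
    lam = z.T @ f / (f @ f)
    return f, lam

def sequential_ls(x, blocks, n_iter=50):
    # x: (T, N) data; blocks: list of column-index arrays, one per region
    t_obs, _ = x.shape
    f_g = np.zeros(t_obs)
    f_r = {b: np.zeros(t_obs) for b in range(len(blocks))}
    lam_r = {b: np.zeros(len(cols)) for b, cols in enumerate(blocks)}
    for _ in range(n_iter):
        # remove current regional components, update the global factor
        resid = x.copy()
        for b, cols in enumerate(blocks):
            resid[:, cols] -= np.outer(f_r[b], lam_r[b])
        f_g, lam_g = first_pc(resid)
        # remove the global component, update each regional factor on its own block
        for b, cols in enumerate(blocks):
            block_resid = x[:, cols] - np.outer(f_g, lam_g[cols])
            f_r[b], lam_r[b] = first_pc(block_resid)
    return f_g, f_r

# example with 3 invented regions of 10 series each over 200 periods
rng = np.random.default_rng(0)
x = rng.standard_normal((200, 30))
blocks = [np.arange(0, 10), np.arange(10, 20), np.arange(20, 30)]
f_global, f_regional = sequential_ls(x, blocks)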