Search results

1 – 10 of over 1000
Article
Publication date: 6 March 2017

Yen-Ching Chang

Abstract

Purpose

The Hurst exponent is important for distinguishing between fractal signals and explaining their significance. Accuracy and efficiency are two unavoidable considerations for any estimator of the Hurst exponent. The main purpose of this study is to raise the execution efficiency of existing estimators, especially the fast maximum likelihood estimator (MLE), which has optimal accuracy.

Design/methodology/approach

A two-stage procedure is developed that combines a quicker method with a more accurate one, narrowing the estimate of the Hurst exponent from a large range to a small one. For the best possible accuracy, the data-induction method is currently the ideal first-stage estimator, and the fast MLE is the best candidate for the second stage.

Findings

For signals modeled as discrete-time fractional Gaussian noise, the proposed two-stage estimator saves up to 41.18 per cent of the computational time of the fast MLE while remaining almost as accurate; for signals modeled as discrete-time fractional Brownian motion, it saves about 35.29 per cent, except for smaller data sizes.

Originality/value

The proposed two-stage estimation procedure is a novel idea. Other parameter-estimation problems can be expected to apply the same concept to raise computational performance while remaining almost as accurate as the more accurate of the two estimators.
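
To make the two-stage idea concrete, the following Python sketch pairs a quick aggregated-variance estimate (a stand-in for the data-induction method, which is not reproduced here) with an exact Gaussian maximum likelihood refinement for fractional Gaussian noise over a narrowed interval. The function names, scale grid and interval half-width are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def fgn_autocov(k, H):
    """Autocovariance of unit-variance fractional Gaussian noise at lag k."""
    k = np.abs(k)
    return 0.5 * ((k + 1) ** (2 * H) - 2 * k ** (2 * H) + np.abs(k - 1) ** (2 * H))

def neg_log_likelihood(H, x):
    """Exact Gaussian negative log-likelihood of fGn (scale profiled out)."""
    n = len(x)
    R = fgn_autocov(np.subtract.outer(np.arange(n), np.arange(n)), H)
    L = np.linalg.cholesky(R)
    z = np.linalg.solve(L, x)
    return n * np.log(z @ z / n) + 2 * np.sum(np.log(np.diag(L)))

def rough_hurst(x, scales=(2, 4, 8, 16, 32)):
    """Stage one: quick aggregated-variance estimate of H (a stand-in for the
    data-induction method); needs len(x) well above max(scales)."""
    n = len(x)
    logs, logv = [], []
    for m in scales:
        blocks = x[: n - n % m].reshape(-1, m).mean(axis=1)
        logs.append(np.log(m))
        logv.append(np.log(blocks.var()))
    slope = np.polyfit(logs, logv, 1)[0]        # block variance ~ m^(2H - 2)
    return np.clip(slope / 2 + 1, 0.05, 0.95)

def two_stage_hurst(x, half_width=0.1):
    """Stage two: exact ML over a narrow interval around the stage-one value."""
    h0 = rough_hurst(x)
    lo, hi = max(0.01, h0 - half_width), min(0.99, h0 + half_width)
    res = minimize_scalar(neg_log_likelihood, bounds=(lo, hi), args=(x,),
                          method="bounded")
    return res.x
```

The saving comes from the second stage searching only a small neighbourhood of the first-stage value instead of the whole unit interval.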

Article
Publication date: 1 February 2006

Ahmed Hurairah, Noor Akma Ibrahim, Isa Bin Daud and Kassim Haron

Abstract

Purpose

Exact confidence interval estimation for the new extreme value model is often impractical. This paper seeks to evaluate the accuracy of approximate confidence intervals for the two‐parameter new extreme value model.

Design/methodology/approach

Confidence intervals for the parameters of the new model based on the likelihood ratio, Wald and Rao statistics are evaluated and compared through a simulation study. The criteria used in evaluating the confidence intervals are the attainment of the nominal error probability and the symmetry of the lower and upper error probabilities.

Findings

This study substantiates the merits of the likelihood ratio, the Wald and the Rao statistics. The results indicate that the likelihood ratio‐based intervals perform much better than the Wald and Rao intervals.

Originality/value

Exact interval estimates for the new model are difficult to obtain. Consequently, large sample intervals based on the asymptotic maximum likelihood estimators have gained widespread use. Intervals based on inverting likelihood ratio, Rao and Wald statistics are rarely used in commercial packages. This paper shows that the likelihood ratio intervals are superior to intervals based on the Wald and the Rao statistics.
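
As a generic illustration of the three interval types compared above, the sketch below computes Wald, likelihood-ratio and Rao (score) intervals for an exponential rate parameter. The exponential model is only a stand-in, since the paper's two-parameter new extreme value model is not specified here, and all function names and bracketing choices are hypothetical.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import chi2, norm

def loglik(lam, x):
    """Log-likelihood of an exponential rate parameter lam."""
    return len(x) * np.log(lam) - lam * x.sum()

def wald_ci(x, alpha=0.05):
    n = len(x)
    lam_hat = n / x.sum()
    se = lam_hat / np.sqrt(n)                  # from the information n / lam^2
    z = norm.ppf(1 - alpha / 2)
    return lam_hat - z * se, lam_hat + z * se

def lr_ci(x, alpha=0.05):
    lam_hat = len(x) / x.sum()
    cut = chi2.ppf(1 - alpha, df=1)
    g = lambda lam: 2 * (loglik(lam_hat, x) - loglik(lam, x)) - cut
    return brentq(g, 1e-8, lam_hat), brentq(g, lam_hat, 50 * lam_hat)

def rao_ci(x, alpha=0.05):
    n = len(x)
    lam_hat = n / x.sum()
    cut = chi2.ppf(1 - alpha, df=1)
    # Score U(lam) = n/lam - sum(x); Fisher information I(lam) = n / lam^2
    g = lambda lam: (n / lam - x.sum()) ** 2 * lam ** 2 / n - cut
    return brentq(g, 1e-8, lam_hat), brentq(g, lam_hat, 50 * lam_hat)

# Example with simulated data (a moderately large sample keeps the score
# interval's bracketing valid)
x = np.random.default_rng(1).exponential(scale=2.0, size=100)
print(wald_ci(x), lr_ci(x), rao_ci(x))
```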

Details

Engineering Computations, vol. 23 no. 2
Type: Research Article
ISSN: 0264-4401

Book part
Publication date: 30 August 2019

Md. Nazmul Ahsan and Jean-Marie Dufour

Abstract

Statistical inference (estimation and testing) for the stochastic volatility (SV) model of Taylor (1982, 1986) is challenging, especially for likelihood-based methods, which are difficult to apply due to the presence of latent variables. Existing methods are either computationally costly or inefficient, or both. In this paper, we propose computationally simple estimators for the SV model which are at the same time highly efficient. The proposed class of estimators uses a small number of moment equations derived from an ARMA representation associated with the SV model, along with the possibility of using “winsorization” to improve stability and efficiency. We call these ARMA-SV estimators. Closed-form expressions for the ARMA-SV estimators are obtained, and no numerical optimization procedure or choice of initial parameter values is required. The asymptotic distributional theory of the proposed estimators is studied. Due to their computational simplicity, the ARMA-SV estimators allow one to make reliable – even exact – simulation-based inference through the application of Monte Carlo (MC) tests or bootstrap methods. We compare them in a simulation experiment with a wide array of alternative estimation methods in terms of bias, root mean square error and computation time. In addition to confirming the enormous computational advantage of the proposed estimators, the results show that ARMA-SV estimators match (or exceed) alternative estimators in terms of precision, including the widely used Bayesian estimator. The proposed methods are applied to daily observations on the returns of three major stocks (Coca-Cola, Walmart, Ford) and the S&P Composite Price Index (2000–2017). The results confirm the presence of stochastic volatility with strong persistence.
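
A minimal moment-based sketch in the spirit of the ARMA-SV idea is given below: it exploits the ARMA-type autocovariance structure of log squared returns under the basic SV model to obtain closed-form estimates without numerical optimization. It is not the authors' estimator (no winsorization or refined moment selection); the constants are standard properties of log chi-square noise and the function names are illustrative.

```python
import numpy as np

LOG_CHI2_MEAN = -1.2704          # E[log e_t^2] for e_t ~ N(0, 1)
LOG_CHI2_VAR = np.pi ** 2 / 2    # Var[log e_t^2]

def autocov(w, k):
    """Sample autocovariance of w at lag k."""
    w = w - w.mean()
    return np.mean(w[k:] * w[:len(w) - k])

def sv_moment_estimates(y):
    """Closed-form moment estimates of (mu, phi, sigma^2) for the basic SV
    model y_t = exp(h_t / 2) e_t, h_t = mu + phi (h_{t-1} - mu) + sigma v_t,
    using the ARMA-type autocovariances of w_t = log y_t^2."""
    w = np.log(y ** 2)
    g1, g2 = autocov(w, 1), autocov(w, 2)
    phi = g2 / g1                        # gamma_w(k) = phi^k * Var(h_t), k >= 1
    var_h = g1 / phi
    sigma2 = var_h * (1 - phi ** 2)      # innovation variance of h_t
    mu = w.mean() - LOG_CHI2_MEAN        # remove the mean of log e_t^2
    # Rough check: autocov(w, 0) should be close to var_h + LOG_CHI2_VAR
    return {"mu": mu, "phi": phi, "sigma2": sigma2}
```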

Details

Topics in Identification, Limited Dependent Variables, Partial Observability, Experimentation, and Flexible Modeling: Part A
Type: Book
ISBN: 978-1-78973-241-2

Book part
Publication date: 15 April 2020

Joshua C. C. Chan, Chenghan Hou and Thomas Tao Yang

Abstract

Importance sampling is a popular Monte Carlo method used in a variety of areas in econometrics. When the variance of the importance sampling estimator is infinite, the central limit theorem does not apply and estimates tend to be erratic even when the simulation size is large. The authors consider asymptotic trimming in such a setting. Specifically, the authors propose a bias-corrected tail-trimmed estimator that is consistent and has finite variance. The authors show that the proposed estimator is asymptotically normal and has good finite-sample properties in a Monte Carlo study.
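
The sketch below shows the trimming idea in its simplest form: a self-normalized importance sampling estimator and a variant that discards the draws carrying the largest weights. The chapter's bias correction and asymptotic trimming rate are not reproduced; the trimming fraction, example distributions and function names are illustrative assumptions.

```python
import numpy as np
from scipy import stats

def snis(h, target_logpdf, proposal_logpdf, draws):
    """Plain self-normalized importance sampling estimate of E_target[h(X)]."""
    logw = target_logpdf(draws) - proposal_logpdf(draws)
    w = np.exp(logw - logw.max())              # stabilize before normalizing
    return np.sum(w * h(draws)) / np.sum(w)

def tail_trimmed_snis(h, target_logpdf, proposal_logpdf, draws, trim_frac=0.01):
    """Trimmed variant: drop the draws with the largest weights.  Only the
    trimming step is shown; the chapter's bias correction is omitted."""
    logw = target_logpdf(draws) - proposal_logpdf(draws)
    w = np.exp(logw - logw.max())
    keep = w <= np.quantile(w, 1 - trim_frac)
    return np.sum(w[keep] * h(draws[keep])) / np.sum(w[keep])

# Example: tail probability of a Student-t(2) target under a normal proposal,
# a combination whose raw importance weights have infinite variance.
rng = np.random.default_rng(0)
draws = rng.normal(0.0, 3.0, size=100_000)
h = lambda x: (x > 2.0).astype(float)
print(snis(h, stats.t(df=2).logpdf, stats.norm(0.0, 3.0).logpdf, draws))
print(tail_trimmed_snis(h, stats.t(df=2).logpdf, stats.norm(0.0, 3.0).logpdf, draws))
```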

Book part
Publication date: 18 January 2022

Badi H. Baltagi, Georges Bresson, Anoop Chaturvedi and Guy Lacroix

Abstract

This chapter extends the work of Baltagi, Bresson, Chaturvedi, and Lacroix (2018) to the popular dynamic panel data model. The authors investigate the robustness of Bayesian panel data models to possible misspecification of the prior distribution. The proposed robust Bayesian approach departs from the standard Bayesian framework in two ways. First, the authors consider the ε-contamination class of prior distributions for the model parameters as well as for the individual effects. Second, both the base elicited priors and the ε-contamination priors use Zellner’s (1986) g-priors for the variance–covariance matrices. The authors propose a general “toolbox” for a wide range of specifications which includes the dynamic panel model with random effects, with cross-correlated effects à la Chamberlain, for the Hausman–Taylor world and for dynamic panel data models with homogeneous/heterogeneous slopes and cross-sectional dependence. Using a Monte Carlo simulation study, the authors compare the finite sample properties of the proposed estimator to those of standard classical estimators. The chapter contributes to the dynamic panel data literature by proposing a general robust Bayesian framework which encompasses the conventional frequentist specifications and their associated estimation methods as special cases.
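
For readers unfamiliar with the ε-contamination class, the toy sketch below only evaluates the mixture prior π(θ) = (1 − ε)π₀(θ) + εq(θ) for a scalar parameter; the base and contamination densities are arbitrary illustrative choices, and the chapter's g-prior construction for panel models is not attempted.

```python
import numpy as np
from scipy import stats

def eps_contaminated_logprior(theta, eps, base, contamination):
    """Log density of pi(theta) = (1 - eps) * pi_0(theta) + eps * q(theta)."""
    return np.log((1 - eps) * base.pdf(theta) + eps * contamination.pdf(theta))

# Example: a tightly elicited N(0, 1) base prior contaminated by a diffuse
# Cauchy, with 5 per cent contamination mass.
log_prior = lambda th: eps_contaminated_logprior(
    th, 0.05, stats.norm(0.0, 1.0), stats.cauchy(0.0, 10.0))
```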

Details

Essays in Honor of M. Hashem Pesaran: Panel Modeling, Micro Applications, and Econometric Methodology
Type: Book
ISBN: 978-1-80262-065-8

Book part
Publication date: 24 March 2006

Ngai Hang Chan and Wilfredo Palma

Abstract

Since the seminal works by Granger and Joyeux (1980) and Hosking (1981), the estimation of long-memory time series models has been receiving considerable attention and a number of parameter estimation procedures have been proposed. This paper gives an overview of this plethora of methodologies, with special focus on likelihood-based techniques. Broadly speaking, likelihood-based techniques can be classified into the following categories: exact maximum likelihood (ML) estimation (Sowell, 1992; Dahlhaus, 1989), ML estimates based on autoregressive approximations (Granger & Joyeux, 1980; Li & McLeod, 1986), Whittle estimates (Fox & Taqqu, 1986; Giraitis & Surgailis, 1990), Whittle estimates with autoregressive truncation (Beran, 1994a), approximate estimates based on the Durbin–Levinson algorithm (Haslett & Raftery, 1989), state-space-based maximum likelihood estimates for ARFIMA models (Chan & Palma, 1998), and estimation of stochastic volatility models (Ghysels, Harvey, & Renault, 1996; Breidt, Crato, & de Lima, 1998; Chan & Petris, 2000), among others. Given the diversified applications of these techniques in different areas, this review aims at providing a succinct survey of these methodologies as well as an overview of important related problems such as ML estimation with missing data (Palma & Chan, 1997), the influence of subsets of observations on estimates, and the estimation of seasonal long-memory models (Palma & Chan, 2005). The performance and asymptotic properties of these techniques are compared and examined, and the interconnections and finite-sample performance of these procedures are studied. Finally, applications of these methodologies to financial time series are discussed.
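
As one concrete example of the likelihood-based techniques surveyed above, the sketch below implements a basic Whittle estimator of the memory parameter d for an ARFIMA(0, d, 0) model, with the scale profiled out of the objective. The bounds and function name are illustrative, and the more elaborate variants cited in the abstract are not reproduced.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def whittle_d(x):
    """Whittle estimate of d for ARFIMA(0, d, 0); the spectral density is
    proportional to |2 sin(lambda/2)|^(-2d) and the scale is profiled out."""
    n = len(x)
    j = np.arange(1, (n - 1) // 2 + 1)          # Fourier frequencies 2*pi*j/n
    lam = 2 * np.pi * j / n
    periodogram = np.abs(np.fft.fft(x - x.mean())[j]) ** 2 / (2 * np.pi * n)

    def objective(d):
        g = np.abs(2 * np.sin(lam / 2)) ** (-2 * d)
        return np.log(np.mean(periodogram / g)) + np.mean(np.log(g))

    res = minimize_scalar(objective, bounds=(-0.49, 0.49), method="bounded")
    return res.x
```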

Details

Econometric Analysis of Financial and Economic Time Series
Type: Book
ISBN: 978-1-84950-388-4

Book part
Publication date: 12 December 2003

Badi H. Baltagi, Georges Bresson and Alain Pirotte

Abstract

In the spirit of White’s (1982) paper, this paper examines the consequences of model misspecification using a panel data regression model. Maximum likelihood, random effects and fixed effects estimators are compared using Monte Carlo experiments under normality of the disturbances but with a possibly misspecified variance-covariance matrix. We show that the correct GLS (ML) procedure is always the best according to MSE performance, but the researcher does not have perfect foresight regarding the true form of the variance-covariance matrix. In this case, we show that a pretest estimator is a viable alternative, given that its performance is a close second to correct GLS (ML) whether the true specification is a two-way error component model, a one-way error component model or a pooled regression model. Incorrect GLS, ML or fixed effects estimators may lead to a large loss in MSE.
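
A stripped-down illustration of the pretest strategy appears below: it keeps the within (fixed effects) estimator only when the standard F test for individual effects rejects, and falls back to pooled OLS otherwise. This captures only the flavour of the chapter's pretest, which operates on the estimated variance components of one-way and two-way error component models; the data layout and function names are assumptions.

```python
import numpy as np
from scipy import stats

def ols(y, X):
    """OLS coefficients and residual sum of squares."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    return beta, np.sum((y - X @ beta) ** 2)

def pretest_panel(y, X, groups, alpha=0.05):
    """y: (NT,), X: (NT, K) with the intercept in the first column,
    groups: (NT,) individual identifiers."""
    n_obs, k = X.shape
    ids = np.unique(groups)
    n_groups = len(ids)

    beta_pooled, rss_pooled = ols(y, X)             # pooled regression

    # Within (fixed effects): demean y and the slope columns by individual
    y_w = y.astype(float).copy()
    X_w = X[:, 1:].astype(float).copy()
    for g in ids:
        m = groups == g
        y_w[m] -= y_w[m].mean()
        X_w[m] -= X_w[m].mean(axis=0)
    beta_fe, rss_fe = ols(y_w, X_w)

    # Standard F test for individual effects
    df1, df2 = n_groups - 1, n_obs - n_groups - (k - 1)
    F = ((rss_pooled - rss_fe) / df1) / (rss_fe / df2)
    if F > stats.f.ppf(1 - alpha, df1, df2):
        return "fixed effects", beta_fe             # slope coefficients only
    return "pooled OLS", beta_pooled                # intercept + slopes
```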

Details

Maximum Likelihood Estimation of Misspecified Models: Twenty Years Later
Type: Book
ISBN: 978-1-84950-253-5

Book part
Publication date: 6 January 2016

Lukas Koelbl, Alexander Braumann, Elisabeth Felsenstein and Manfred Deistler

Abstract

This paper is concerned with estimation of the parameters of a high-frequency VAR model using mixed-frequency data, both for the stock and for the flow case. Extended Yule–Walker estimators and (Gaussian) maximum likelihood type estimators based on the EM algorithm are considered. Properties of these estimators are derived, partly analytically and partly by simulation. Finally, the loss of information due to mixed-frequency data compared to the high-frequency situation, as well as the gain of information from using mixed-frequency data relative to low-frequency data, is discussed.
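
For orientation, the sketch below gives the ordinary Yule–Walker estimator of a VAR(1) from complete high-frequency data, the baseline against which mixed-frequency extended Yule–Walker and EM estimators are compared. The mixed-frequency machinery itself is not reproduced, and the function name is illustrative.

```python
import numpy as np

def yule_walker_var1(X):
    """Yule-Walker estimates of a VAR(1) from a (T, k) array of observations:
    A = Gamma(1) Gamma(0)^{-1}, Sigma = Gamma(0) - A Gamma(1)'."""
    Xc = X - X.mean(axis=0)
    T = len(Xc)
    gamma0 = Xc.T @ Xc / T                     # Gamma(0) = E[x_t x_t']
    gamma1 = Xc[1:].T @ Xc[:-1] / (T - 1)      # Gamma(1) = E[x_t x_{t-1}']
    A = gamma1 @ np.linalg.inv(gamma0)
    Sigma = gamma0 - A @ gamma1.T              # innovation covariance
    return A, Sigma
```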

Article
Publication date: 14 October 2022

Fernando Antonio Moala and Karlla Delalibera Chagas

Abstract

Purpose

The step-stress accelerated test is the most appropriate statistical method to obtain information about the reliability of new products faster than would be possible if the product were left to fail under normal use. This paper presents the multiple step-stress accelerated life test using type-II censored data and assuming a cumulative exposure model. The authors propose Bayesian inference with the lifetimes of the test items following a gamma distribution. The choice of loss function is an essential part of Bayesian estimation problems. Therefore, the Bayesian estimators for the parameters are obtained under different loss functions and a comparison with the usual maximum likelihood estimation (MLE) approach is carried out. Finally, an example is presented to illustrate the proposed procedure.

Design/methodology/approach

Bayesian inference is performed and the parameter estimators are obtained under symmetric and asymmetric loss functions. A sensitivity analysis of the Bayes and MLE estimators is presented via Monte Carlo simulation to verify whether the Bayesian analysis performs better.

Findings

The authors demonstrate that the Bayesian estimators give better results than the MLE with respect to MSE and bias. Of the three loss functions considered, the dominant estimator, with the smallest MSE and bias, is the Bayesian estimator under the general entropy loss function, followed closely by the estimator under the Linex loss function. In this case, the use of a symmetric loss function such as the SELF is inappropriate for the SSALT, mainly with small data sets.

Originality/value

Most papers in the literature estimate the SSALT through the MLE. In this paper, the authors develop a Bayesian analysis for the SSALT and discuss the procedures to obtain the Bayes estimators under symmetric and asymmetric loss functions. The choice of loss function is an essential part of Bayesian estimation problems.
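
Given posterior draws of a positive parameter (from any sampler), the Bayes estimators under the loss functions discussed above have well-known closed forms, sketched below: the posterior mean under squared error loss (SELF), −(1/a) log E[e^(−aθ)] under the Linex loss, and (E[θ^(−q)])^(−1/q) under the general entropy loss. The sampler producing the draws and the loss parameters a and q are left unspecified; this is not the paper's specific SSALT implementation.

```python
import numpy as np

def bayes_self(draws):
    """Bayes estimator under squared error loss: the posterior mean."""
    return np.mean(draws)

def bayes_linex(draws, a):
    """Bayes estimator under Linex loss: -(1/a) log E[exp(-a theta)],
    computed with a shift to avoid overflow."""
    m = np.max(-a * draws)
    return -(np.log(np.mean(np.exp(-a * draws - m))) + m) / a

def bayes_general_entropy(draws, q):
    """Bayes estimator under general entropy loss: (E[theta^(-q)])^(-1/q)."""
    return np.mean(draws ** (-q)) ** (-1.0 / q)
```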

Details

International Journal of Quality & Reliability Management, vol. 40 no. 4
Type: Research Article
ISSN: 0265-671X

Book part
Publication date: 1 December 2016

R. Kelley Pace and James P. LeSage

Abstract

We show how to quickly estimate spatial probit models for large data sets using maximum likelihood. Like Beron and Vijverberg (2004), we use the GHK (Geweke-Hajivassiliou-Keane) algorithm to perform maximum simulated likelihood estimation. However, using the GHK for large sample sizes has been viewed as extremely difficult (Wang, Iglesias, & Wooldridge, 2013). Nonetheless, for sparse covariance and precision matrices often encountered in spatial settings, the GHK can be applied to very large sample sizes as its operation counts and memory requirements increase almost linearly with n when using sparse matrix techniques.
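
Since the abstract centres on the GHK simulator, a bare-bones version for a multivariate normal rectangle probability, the building block of maximum simulated likelihood for probit models, is sketched below. The sparse-matrix refinements that make the approach scale to very large spatial samples are not shown, and the example correlation and draw count are illustrative.

```python
import numpy as np
from scipy.stats import norm

def ghk(a, b, Sigma, n_draws=1000, rng=None):
    """GHK estimate of P(a <= X <= b) for X ~ N(0, Sigma)."""
    rng = np.random.default_rng(rng)
    L = np.linalg.cholesky(Sigma)
    k = len(a)
    weights = np.ones(n_draws)
    eta = np.zeros((n_draws, k))
    for j in range(k):
        # Conditional truncation limits given the draws for dimensions < j
        shift = eta[:, :j] @ L[j, :j]
        lo = norm.cdf((a[j] - shift) / L[j, j])
        hi = norm.cdf((b[j] - shift) / L[j, j])
        u = rng.uniform(size=n_draws)
        eta[:, j] = norm.ppf(lo + u * (hi - lo))
        weights *= hi - lo
    return weights.mean()

# Example: P(X1 > 0, X2 > 0) for a bivariate normal with correlation 0.5
Sigma = np.array([[1.0, 0.5], [0.5, 1.0]])
p = ghk(np.zeros(2), np.full(2, np.inf), Sigma, n_draws=20_000)
print(p)  # exact value for this correlation is 1/3
```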

Details

Spatial Econometrics: Qualitative and Limited Dependent Variables
Type: Book
ISBN: 978-1-78560-986-2
