
Search results

1 – 10 of 714
Article
Publication date: 6 March 2017

Speeding up estimation of the Hurst exponent by a two-stage procedure from a large to small range

Yen-Ching Chang


Abstract

Purpose

The Hurst exponent plays an important role in distinguishing between fractal signals and explaining their significance. For estimators of the Hurst exponent, accuracy and efficiency are two unavoidable considerations. The main purpose of this study is to raise the execution efficiency of existing estimators, especially the fast maximum likelihood estimator (MLE), which has optimal accuracy.

Design/methodology/approach

A two-stage procedure is developed that combines a quicker method with a more accurate one, narrowing the estimate of the Hurst exponent from a large range to a small one. For the best possible accuracy, the data-induction method is currently the ideal first-stage estimator, and the fast MLE is the best candidate for the second stage.

Findings

For signals modeled as discrete-time fractional Gaussian noise, the proposed two-stage estimator can save up to 41.18 per cent of the computational time of the fast MLE while remaining almost as accurate; for signals modeled as discrete-time fractional Brownian motion, it can save about 35.29 per cent, except for smaller data sizes.

Originality/value

The proposed two-stage estimation procedure is a novel idea. Other areas of parameter estimation can be expected to apply the same concept to raise computational performance while remaining almost as accurate as the more accurate of the two estimators.
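The two-stage principle described in this abstract generalises readily. As a hedged sketch (the quadratic `loss` below is my stand-in for the paper's fractional-Gaussian-noise likelihood, and the grid sizes are illustrative, not the paper's), a cheap coarse search can shrink the range before the expensive, accurate search runs:

```python
def loss(h, target=0.73):
    # Stand-in for a negative log-likelihood in the Hurst parameter h.
    return (h - target) ** 2

def grid_search(lo, hi, steps):
    """Return the best grid point in [lo, hi] and the number of evaluations."""
    best_h, best_v = lo, loss(lo)
    for i in range(1, steps + 1):
        h = lo + (hi - lo) * i / steps
        v = loss(h)
        if v < best_v:
            best_h, best_v = h, v
    return best_h, steps + 1

def two_stage(lo=0.0, hi=1.0, coarse=10, fine=100):
    # Stage 1: quick, coarse estimate over the full admissible range.
    h1, n1 = grid_search(lo, hi, coarse)
    # Stage 2: accurate estimate restricted to one coarse cell either side.
    half = (hi - lo) / coarse
    h2, n2 = grid_search(max(lo, h1 - half), min(hi, h1 + half), fine)
    return h2, n1 + n2

h, evals = two_stage()
```

Here the two stages use 112 evaluations in total, against 501 for a single fine grid at the same 0.002 resolution over the whole range.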

Details

Engineering Computations, vol. 34 no. 1
Type: Research Article
DOI: https://doi.org/10.1108/EC-01-2016-0036
ISSN: 0264-4401

Keywords

  • Hurst exponent
  • Fractional Brownian motion
  • Data-induction method
  • Fast maximum likelihood estimator
  • Fractional Gaussian noise
  • Maximum likelihood estimator

Article
Publication date: 1 February 2006

Approximate confidence interval for the new extreme value distribution

Ahmed Hurairah, Noor Akma Ibrahim, Isa Bin Daud and Kassim Haron


Abstract

Purpose

Exact confidence interval estimation for the new extreme value model is often impractical. This paper seeks to evaluate the accuracy of approximate confidence intervals for the two‐parameter new extreme value model.

Design/methodology/approach

The confidence intervals of the parameters of the new model based on likelihood ratio, Wald and Rao statistics are evaluated and compared through the simulation study. The criteria used in evaluating the confidence intervals are the attainment of the nominal error probability and the symmetry of lower and upper error probabilities.

Findings

This study substantiates the merits of the likelihood ratio, the Wald and the Rao statistics. The results indicate that the likelihood ratio‐based intervals perform much better than the Wald and Rao intervals.

Originality/value

Exact interval estimates for the new model are difficult to obtain. Consequently, large sample intervals based on the asymptotic maximum likelihood estimators have gained widespread use. Intervals based on inverting likelihood ratio, Rao and Wald statistics are rarely used in commercial packages. This paper shows that the likelihood ratio intervals are superior to intervals based on the Wald and the Rao statistics.
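The interval types compared in this abstract can be illustrated on a much simpler model. A hedged sketch (an exponential rate parameter, not the paper's new extreme value distribution; `n` and `sx` are made-up data summaries): the Wald interval inverts a quadratic approximation at the MLE, while the likelihood ratio interval inverts the LR statistic itself and so can follow the skew of the likelihood.

```python
import math

def loglik(lam, n, sx):
    # Exponential log-likelihood: n*log(lam) - lam * sum(x).
    return n * math.log(lam) - lam * sx

def wald_interval(n, sx, z=1.96):
    lam_hat = n / sx                       # MLE of the rate
    se = lam_hat / math.sqrt(n)            # asymptotic standard error
    return lam_hat - z * se, lam_hat + z * se

def lr_interval(n, sx, crit=3.841):        # 95% chi-square(1) critical value
    lam_hat = n / sx
    lmax = loglik(lam_hat, n, sx)

    def dev(lam):                          # LR statistic minus the cutoff
        return 2.0 * (lmax - loglik(lam, n, sx)) - crit

    def root(lo, hi):                      # bisection: dev changes sign on [lo, hi]
        for _ in range(200):
            mid = 0.5 * (lo + hi)
            if dev(lo) * dev(mid) <= 0.0:
                hi = mid
            else:
                lo = mid
        return 0.5 * (lo + hi)

    return root(1e-9, lam_hat), root(lam_hat, 50.0 * lam_hat)

wl, wu = wald_interval(20, 25.0)
ll, lu = lr_interval(20, 25.0)
```

On these numbers the Wald interval is symmetric about the MLE of 0.8, while the LR interval shifts to the right, respecting the skewness of the likelihood.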

Details

Engineering Computations, vol. 23 no. 2
Type: Research Article
DOI: https://doi.org/10.1108/02644400610644513
ISSN: 0264-4401

Keywords

  • Parametric measures
  • Normal distribution
  • Log‐normal distribution
  • Simulation

Book part
Publication date: 30 August 2019

A Simple Efficient Moment-based Estimator for the Stochastic Volatility Model

Md. Nazmul Ahsan and Jean-Marie Dufour


Abstract

Statistical inference (estimation and testing) for the stochastic volatility (SV) model of Taylor (1982, 1986) is challenging, especially for likelihood-based methods, which are difficult to apply due to the presence of latent variables. The existing methods are computationally costly, inefficient, or both. In this paper, we propose computationally simple estimators for the SV model which are at the same time highly efficient. The proposed class of estimators uses a small number of moment equations derived from an ARMA representation associated with the SV model, along with the possibility of using “winsorization” to improve stability and efficiency. We call these ARMA-SV estimators. Closed-form expressions for ARMA-SV estimators are obtained, and no numerical optimization procedure or choice of initial parameter values is required. The asymptotic distributional theory of the proposed estimators is studied. Due to their computational simplicity, the ARMA-SV estimators allow one to make reliable – even exact – simulation-based inference, through the application of Monte Carlo (MC) test or bootstrap methods. We compare them in a simulation experiment with a wide array of alternative estimation methods, in terms of bias, root mean square error and computation time. In addition to confirming the enormous computational advantage of the proposed estimators, the results show that ARMA-SV estimators match (or exceed) alternative estimators in terms of precision, including the widely used Bayesian estimator. The proposed methods are applied to daily observations on the returns for three major stock prices (Coca-Cola, Walmart, Ford) and the S&P Composite Price Index (2000–2017). The results confirm the presence of stochastic volatility with strong persistence.
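The closed-form flavour of such moment estimators can be sketched in a few lines. This is a hedged caricature, not the chapter's ARMA-SV estimator: it uses the textbook fact that for a basic SV model the autocovariances of log y_t² at lags k ≥ 1 equal those of the latent AR(1) volatility process, so the persistence parameter comes from a ratio of two sample autocovariances, with no numerical optimization.

```python
import math
import random

random.seed(7)

# Simulate a basic SV model: y_t = exp(h_t / 2) * eps_t,
# h_t = phi * h_{t-1} + sigma * eta_t (illustrative parameter values).
phi_true, sigma, n = 0.95, 0.3, 200_000
h, x = 0.0, []
for _ in range(n):
    h = phi_true * h + sigma * random.gauss(0.0, 1.0)
    y = math.exp(h / 2.0) * random.gauss(0.0, 1.0)
    x.append(math.log(y * y))          # log y_t^2 = h_t + iid noise

mean_x = sum(x) / n

def gamma(k):
    # Sample autocovariance of log y^2 at lag k.
    return sum((x[t] - mean_x) * (x[t + k] - mean_x)
               for t in range(n - k)) / n

# For k >= 1, gamma(k) = phi^k * var(h), hence the closed form below.
phi_hat = gamma(2) / gamma(1)
```

The simulation settings are mine; the point is only that the estimate is a closed-form function of sample moments.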

Details

Topics in Identification, Limited Dependent Variables, Partial Observability, Experimentation, and Flexible Modeling: Part A
Type: Book
DOI: https://doi.org/10.1108/S0731-90532019000040A008
ISBN: 978-1-78973-241-2

Keywords

  • Stochastic volatility
  • latent variable
  • ARCH
  • generalized method of moments
  • quasi-maximum likelihood
  • Bayesian estimator
  • Markov Chain Monte Carlo
  • asymptotic distribution
  • Monte Carlo test
  • stock returns
  • C11
  • C13
  • C15
  • C22
  • G1

Book part
Publication date: 15 April 2020

Robust Estimation and Inference for Importance Sampling Estimators with Infinite Variance

Joshua C. C. Chan, Chenghan Hou and Thomas Tao Yang


Abstract

Importance sampling is a popular Monte Carlo method used in a variety of areas in econometrics. When the variance of the importance sampling estimator is infinite, the central limit theorem does not apply, and estimates tend to be erratic even when the simulation size is large. The authors consider asymptotic trimming in such a setting: specifically, they propose a bias-corrected tail-trimmed estimator that is consistent and has finite variance. They show that the proposed estimator is asymptotically normal and has good finite-sample properties in a Monte Carlo study.
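The phenomenon the authors address can be reproduced with a toy experiment (my own simplification, not the chapter's estimator): average terms whose distribution has a finite mean but infinite variance, as heavy-tailed importance sampling weights do, and compare the spread of the plain and tail-trimmed sample means across replications.

```python
import random
import statistics

random.seed(42)

def heavy_term():
    # Pareto(alpha = 1.5) via inverse CDF: finite mean (= 3), infinite
    # variance, mimicking heavy-tailed importance sampling weights.
    return (1.0 - random.random()) ** (-1.0 / 1.5)

def mean_estimate(n, trim_frac=0.0):
    draws = sorted(heavy_term() for _ in range(n))
    k = int(n * trim_frac)                 # drop the k largest terms
    kept = draws[: n - k] if k else draws
    return sum(kept) / len(kept)

plain = [mean_estimate(2000) for _ in range(200)]
trimmed = [mean_estimate(2000, trim_frac=0.01) for _ in range(200)]

spread_plain = statistics.stdev(plain)     # erratic: infinite variance
spread_trim = statistics.stdev(trimmed)    # stable, but biased downward
```

Trimming stabilises the estimator but biases it downward (the true mean is 3), which is exactly why the chapter pairs trimming with a bias correction.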

Details

Essays in Honor of Cheng Hsiao
Type: Book
DOI: https://doi.org/10.1108/S0731-905320200000041008
ISBN: 978-1-78973-958-9

Keywords

  • Simulated maximum likelihood
  • bias correction
  • stochastic volatility
  • importance sampling
  • asymptotic trimming
  • Bayesian estimation
  • C11
  • C32
  • C52

Book part
Publication date: 6 January 2016

Estimation of VAR Systems from Mixed-Frequency Data: The Stock and the Flow Case

Lukas Koelbl, Alexander Braumann, Elisabeth Felsenstein and Manfred Deistler


Abstract

This paper is concerned with estimation of the parameters of a high-frequency VAR model using mixed-frequency data, both for the stock and for the flow case. Extended Yule–Walker estimators and (Gaussian) maximum likelihood type estimators based on the EM algorithm are considered. Properties of these estimators are derived, partly analytically and partly by simulation. Finally, the loss of information due to mixed-frequency data compared to the high-frequency situation is discussed, as well as the gain of information from using mixed-frequency rather than low-frequency data.

Details

Dynamic Factor Models
Type: Book
DOI: https://doi.org/10.1108/S0731-905320150000035002
ISBN: 978-1-78560-353-2

Keywords

  • Dynamic models
  • EM estimation method
  • extended Yule-Walker equations
  • Mixed frequency data
  • C18
  • C38

Book part
Publication date: 1 December 2016

Fast Simulated Maximum Likelihood Estimation of the Spatial Probit Model Capable of Handling Large Samples

R. Kelley Pace and James P. LeSage


Abstract

We show how to quickly estimate spatial probit models for large data sets using maximum likelihood. Like Beron and Vijverberg (2004), we use the GHK (Geweke-Hajivassiliou-Keane) algorithm to perform maximum simulated likelihood estimation. However, using the GHK for large sample sizes has been viewed as extremely difficult (Wang, Iglesias, & Wooldridge, 2013). Nonetheless, for sparse covariance and precision matrices often encountered in spatial settings, the GHK can be applied to very large sample sizes as its operation counts and memory requirements increase almost linearly with n when using sparse matrix techniques.
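The GHK recursion itself is compact. Below is a hedged, dense-matrix sketch for a bivariate orthant probability, none of it the authors' code; the chapter's point is that with sparse Cholesky factors the same steps scale almost linearly in n.

```python
import random
from statistics import NormalDist

random.seed(1)
nd = NormalDist()

def ghk_orthant(L, n_draws):
    """GHK estimate of P(X > 0) for X = L z, z ~ N(0, I), L lower-triangular."""
    total = 0.0
    for _ in range(n_draws):
        prob, z = 1.0, []
        for i, row in enumerate(L):
            shift = sum(row[j] * z[j] for j in range(i))
            a = -shift / row[i]               # X_i > 0  <=>  z_i > a
            p_i = 1.0 - nd.cdf(a)             # prob. the bound is cleared
            prob *= p_i
            # Draw z_i from N(0,1) truncated to (a, inf) by inverse CDF.
            z.append(nd.inv_cdf(nd.cdf(a) + random.random() * p_i))
        total += prob
    return total / n_draws

# Sigma = [[1, 0.5], [0.5, 1]]; its Cholesky factor is:
L = [[1.0, 0.0], [0.5, 3.0 ** 0.5 / 2.0]]
est = ghk_orthant(L, 20_000)
```

For correlation 0.5 the exact orthant probability is 1/4 + arcsin(0.5)/(2π) = 1/3, so the estimate can be checked directly.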

Details

Spatial Econometrics: Qualitative and Limited Dependent Variables
Type: Book
DOI: https://doi.org/10.1108/S0731-905320160000037008
ISBN: 978-1-78560-986-2

Keywords

  • GHK
  • truncated multivariate normal
  • spatial probit
  • sparse matrix
  • maximum simulated likelihood
  • CAR
  • C21
  • C53
  • C55
  • R30
  • R10

Book part
Publication date: 1 December 2016

Likelihood Evaluation of High-Dimensional Spatial Latent Gaussian Models with Non-Gaussian Response Variables

Roman Liesenfeld, Jean-François Richard and Jan Vogler


Abstract

We propose a generic algorithm for numerically accurate likelihood evaluation of a broad class of spatial models characterized by a high-dimensional latent Gaussian process and non-Gaussian response variables. The class of models under consideration includes specifications for discrete choices, event counts and limited-dependent variables (truncation, censoring, and sample selection) among others. Our algorithm relies upon a novel implementation of efficient importance sampling (EIS) specifically designed to exploit typical sparsity of high-dimensional spatial precision (or covariance) matrices. It is numerically very accurate and computationally feasible even for very high-dimensional latent processes. Thus, maximum likelihood (ML) estimation of high-dimensional non-Gaussian spatial models, hitherto considered to be computationally prohibitive, becomes feasible. We illustrate our approach with ML estimation of a spatial probit for US presidential voting decisions and spatial count data models (Poisson and Negbin) for firm location choices.

Details

Spatial Econometrics: Qualitative and Limited Dependent Variables
Type: Book
DOI: https://doi.org/10.1108/S0731-905320160000037009
ISBN: 978-1-78560-986-2

Keywords

  • Count data models
  • discrete choice models
  • firm location choice
  • importance sampling
  • Monte Carlo integration
  • spatial econometrics
  • C15
  • C21
  • C25
  • D22
  • R12

Book part
Publication date: 6 January 2016

Fast ML Estimation of Dynamic Bifactor Models: An Application to European Inflation

Gabriele Fiorentini, Alessandro Galesi and Enrique Sentana


Abstract

We generalise the spectral EM algorithm for dynamic factor models in Fiorentini, Galesi, and Sentana (2014) to bifactor models with pervasive global factors complemented by regional ones. We exploit the sparsity of the loading matrices so that researchers can estimate these models by maximum likelihood with many series from multiple regions. We also derive convenient expressions for the spectral scores and information matrix, which allow us to switch to the scoring algorithm near the optimum. We explore the ability of a model with a global factor and three regional ones to capture inflation dynamics across 25 European countries over 1999–2014.

Details

Dynamic Factor Models
Type: Book
DOI: https://doi.org/10.1108/S0731-905320150000035006
ISBN: 978-1-78560-353-2

Keywords

  • Euro area
  • inflation convergence
  • spectral maximum likelihood
  • Wiener–Kolmogorov filter
  • C32
  • C38
  • E37
  • F45

Book part
Publication date: 24 March 2006

Estimation of Long-Memory Time Series Models: a Survey of Different Likelihood-Based Methods

Ngai Hang Chan and Wilfredo Palma


Abstract

Since the seminal works of Granger and Joyeux (1980) and Hosking (1981), estimation of long-memory time series models has received considerable attention, and a number of parameter estimation procedures have been proposed. This paper gives an overview of this plethora of methodologies, with special focus on likelihood-based techniques. Broadly speaking, likelihood-based techniques can be classified into the following categories: exact maximum likelihood (ML) estimation (Sowell, 1992; Dahlhaus, 1989), ML estimates based on autoregressive approximations (Granger & Joyeux, 1980; Li & McLeod, 1986), Whittle estimates (Fox & Taqqu, 1986; Giraitis & Surgailis, 1990), Whittle estimates with autoregressive truncation (Beran, 1994a), approximate estimates based on the Durbin–Levinson algorithm (Haslett & Raftery, 1989), state-space-based maximum likelihood estimates for ARFIMA models (Chan & Palma, 1998), and estimation of stochastic volatility models (Ghysels, Harvey, & Renault, 1996; Breidt, Crato, & de Lima, 1998; Chan & Petris, 2000), among others. Given the diversified applications of these techniques in different areas, this review provides a succinct survey of these methodologies, as well as an overview of important related problems such as ML estimation with missing data (Palma & Chan, 1997), the influence of subsets of observations on estimates, and the estimation of seasonal long-memory models (Palma & Chan, 2005). The asymptotic properties, interconnections and finite-sample performances of these procedures are compared and examined. Finally, applications of these methodologies to financial time series are discussed.

Details

Econometric Analysis of Financial and Economic Time Series
Type: Book
DOI: https://doi.org/10.1016/S0731-9053(05)20023-3
ISBN: 978-1-84950-388-4

Book part
Publication date: 6 January 2016

Analyzing International Business and Financial Cycles using Multi-Level Factor Models: A Comparison of Alternative Approaches

Jörg Breitung and Sandra Eickmeier


Abstract

This paper compares alternative estimation procedures for multi-level factor models which imply blocks of zero restrictions on the associated matrix of factor loadings. We suggest a sequential least squares algorithm for minimizing the total sum of squared residuals and a two-step approach based on canonical correlations, both of which are much simpler and faster than the Bayesian approaches previously employed in the literature. An additional advantage is that our approaches can be used to estimate more complex multi-level factor structures where the number of levels is greater than two. Monte Carlo simulations suggest that the estimators perform well in the sample sizes typically encountered in factor analysis of macroeconomic data sets. We apply the methodologies to study international comovements of business and financial cycles.
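The least squares building block can be sketched for a single factor (a minimal version of my own; the paper's algorithm applies such steps block-wise under zero restrictions and handles more than two levels): alternate between regressing the panel on the loadings to update the factor, and on the factor to update the loadings, each step lowering the total sum of squared residuals.

```python
import random

random.seed(3)

# Generate a one-factor panel X[t][j] = f_t * lam_j + noise.
T, N = 200, 8
f_true = [random.gauss(0.0, 1.0) for _ in range(T)]
lam_true = [1.0 + 0.1 * j for j in range(N)]
X = [[f_true[t] * lam_true[j] + 0.3 * random.gauss(0.0, 1.0)
      for j in range(N)] for t in range(T)]

# Alternating least squares: factor given loadings, loadings given factor.
lam = [1.0] * N                            # arbitrary starting loadings
for _ in range(50):
    ss = sum(l * l for l in lam)
    f = [sum(X[t][j] * lam[j] for j in range(N)) / ss for t in range(T)]
    sf = sum(v * v for v in f)
    lam = [sum(X[t][j] * f[t] for t in range(T)) / sf for j in range(N)]

# Correlation of the estimated factor with the truth (scale/sign aside).
corr = (sum(a * b for a, b in zip(f, f_true))
        / (sum(a * a for a in f) * sum(b * b for b in f_true)) ** 0.5)
```

Scale and sign of the factor are not identified, but up to these the recovered factor tracks the true one closely.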

Details

Dynamic Factor Models
Type: Book
DOI: https://doi.org/10.1108/S0731-905320150000035005
ISBN: 978-1-78560-353-2

Keywords

  • Factor models
  • canonical correlations
  • international business cycles
  • financial cycles
  • business cycle asymmetries
  • C38
  • C55
