Search results
1 – 10 of 121

Joshua C. C. Chan, Chenghan Hou and Thomas Tao Yang
Abstract
Importance sampling is a popular Monte Carlo method used in a variety of areas in econometrics. When the variance of the importance sampling estimator is infinite, the central limit theorem does not apply and estimates tend to be erratic even when the simulation size is large. The authors consider asymptotic trimming in such a setting. Specifically, they propose a bias-corrected tail-trimmed estimator that is consistent and has finite variance. They show that the proposed estimator is asymptotically normal and has good finite-sample properties in a Monte Carlo study.
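The tail-trimming idea can be sketched in a few lines. The example below is a toy illustration, not the authors' estimator: it forms a self-normalised importance sampling estimate of E[X²] under a deliberately light-tailed proposal (so the weights have infinite variance) and simply discards the largest weights, without the bias correction the chapter develops; all parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Target f = N(0,1); proposal g = N(0, 0.6^2).  Because g has lighter tails
# than f, the weight w = f(x)/g(x) has infinite variance -- the setting the
# chapter addresses.
n, sigma_g = 100_000, 0.6
x = rng.normal(0.0, sigma_g, size=n)

log_w = -0.5 * x**2 + 0.5 * (x / sigma_g) ** 2 + np.log(sigma_g)
w = np.exp(log_w)
h = x**2                                   # estimate E_f[X^2] = 1

naive = np.sum(w * h) / np.sum(w)          # self-normalised IS estimate

# Tail trimming: discard the k_n largest weights, with k_n -> inf, k_n/n -> 0.
# (Plain trimming only; the authors' bias correction is omitted.)
k_n = int(n**0.5)
keep = np.argsort(w)[: n - k_n]
trimmed = np.sum(w[keep] * h[keep]) / np.sum(w[keep])
```

Trimming tames the erratic largest weights at the cost of a downward bias, which is precisely the trade-off the proposed bias correction addresses.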
Xiaohu Wang, Weilin Xiao and Jun Yu
Abstract
This chapter derives asymptotic properties of the least squares (LS) estimator of the autoregressive (AR) parameter in local to unity processes with errors being fractional Gaussian noise (FGN) with the Hurst parameter H.
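For context, the LS estimator of the AR parameter in a local-to-unity model ρ_T = 1 + c/T is a one-line computation. The sketch below uses plain Gaussian errors for simplicity; simulating the chapter's fractional Gaussian noise errors is non-trivial and is not attempted here, and the values of T and c are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)

# Local-to-unity AR(1): rho_T = 1 + c/T with c < 0 (mildly stationary side).
# Errors here are iid Gaussian, not the chapter's fractional Gaussian noise.
T, c = 500, -5.0
rho = 1.0 + c / T
y = np.zeros(T)
eps = rng.normal(size=T)
for t in range(1, T):
    y[t] = rho * y[t - 1] + eps[t]

# Least squares estimator of the AR parameter.
rho_hat = np.sum(y[1:] * y[:-1]) / np.sum(y[:-1] ** 2)
```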
Kohtaro Hitomi, Keiji Nagai, Yoshihiko Nishiyama and Junfan Tao
Abstract
In this study, the authors investigate methods of sequential analysis to test prospectively for the existence of a unit root against stationary or explosive alternatives in a p-th order autoregressive (AR) process monitored over time. The sequential sampling schemes use stopping times based on the observed Fisher information of a local-to-unity parameter. In contrast to the Dickey–Fuller (DF) test statistic, the sequential test statistic is asymptotically normal. The authors derive the joint limit of the test statistic and the stopping time, which can be characterized using a 3/2-dimensional Bessel process driven by a time-changed Brownian motion. They obtain the limiting joint Laplace transform and density function under the null and local alternatives. In addition, simulations confirm the theoretical results.
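A minimal version of an information-based stopping rule can be sketched as follows. Under the null of a unit root, the score Σ y_{t−1}Δy_t is a martingale whose quadratic variation is the observed Fisher information Σ y_{t−1}²; stopping when that information first exceeds a threshold and standardising by its square root gives an approximately standard normal statistic. This is a stylised AR(1) illustration, not the authors' p-th order procedure, and the threshold K is an arbitrary choice.

```python
import numpy as np

rng = np.random.default_rng(2)

# Random walk under the null (unit root), sigma = 1.
n_max = 200_000
eps = rng.normal(size=n_max)
y = np.cumsum(eps)

# Stop when the observed Fisher information of the AR coefficient,
# I_t = sum_{s<=t} y_{s-1}^2, first exceeds a threshold K (hypothetical value).
K = 1e8
info = np.cumsum(y[:-1] ** 2)
tau = min(int(np.searchsorted(info, K)) + 1, n_max - 1)   # stopping time

# Score standardised by sqrt(K): approximately N(0,1) at the stopping time,
# in contrast to the non-normal Dickey-Fuller limit at a fixed sample size.
score = np.sum(y[:tau] * eps[1 : tau + 1])
z = score / np.sqrt(K)
```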
Debajit Dutta, Subhra Sankar Dhar and Amit Mitra
Abstract
Stochastic volatility models are of great importance in mathematical finance, especially for accurately describing the dynamics of financial derivatives. A quantile-based estimator for the location parameter of a stochastic volatility model is proposed, obtained by solving an optimization problem. In this chapter, the asymptotic distribution of the estimator is derived without assuming that the density function of the noise is positive around the corresponding population quantile. A Bayesian approach to the quantile estimation problem is also discussed, and a result regarding the nature of the posterior distribution is established.
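The optimization formulation of quantile estimation can be illustrated directly: the location estimate is the minimiser of the empirical Koenker–Bassett check loss. The sketch below uses heavy-tailed Student-t noise and a crude grid search; it shows the generic check-loss estimator, not the chapter's stochastic volatility setting, and all constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

def check_loss(u, tau):
    """Koenker-Bassett check (pinball) loss rho_tau(u) = u*(tau - 1{u<0})."""
    return u * (tau - (u < 0))

# Heavy-tailed observations around a location mu = 0.5.
mu_true, tau = 0.5, 0.5
x = mu_true + rng.standard_t(df=3, size=5000)

# Quantile estimator as the minimiser of the empirical check loss,
# found here by grid search (a sketch, not a production optimiser).
grid = np.linspace(-2, 3, 2001)
obj = np.array([check_loss(x - m, tau).sum() for m in grid])
mu_hat = grid[np.argmin(obj)]
```

For tau = 0.5 the minimiser is the sample median; other tau values recover other sample quantiles.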
Garland Durham and John Geweke
Abstract
Massively parallel desktop computing capabilities now well within the reach of individual academics modify the environment for posterior simulation in fundamental and potentially quite advantageous ways. But to fully exploit these benefits, algorithms that conform to parallel computing environments are needed. This paper presents a sequential posterior simulator designed to operate efficiently in this context. The simulator makes fewer analytical and programming demands on investigators, and is faster, more reliable, and more complete than conventional posterior simulators. The paper extends existing sequential Monte Carlo methods and theory to provide a thorough and practical foundation for sequential posterior simulation that is well suited to massively parallel computing environments. It provides detailed recommendations on implementation, yielding an algorithm that requires only code for simulation from the prior and evaluation of prior and data densities, and works well in a variety of applications representative of serious empirical work in economics and finance. The algorithm facilitates Bayesian model comparison by producing marginal likelihood approximations of unprecedented accuracy as an incidental by-product, is robust to pathological posterior distributions, and provides estimates of numerical standard error and relative numerical efficiency intrinsically. The paper concludes with an application that illustrates the potential of these simulators for applied Bayesian inference.
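The core loop of a sequential posterior simulator of this kind can be sketched for a toy model: draw particles from the prior, reweight by the data density in batches, resample, and accumulate the log marginal likelihood as a by-product. The example below omits the Metropolis "move" step that the paper's algorithm relies on, so it degenerates far more quickly than the method described; the model and all constants are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy model: y_t ~ N(theta, 1), prior theta ~ N(0, 10^2).  As in the paper,
# only prior simulation and pointwise density evaluation are required.
n_obs, n_part = 200, 4000
theta_true = 1.5
y = rng.normal(theta_true, 1.0, size=n_obs)

theta = rng.normal(0.0, 10.0, size=n_part)       # particles from the prior
log_ml = 0.0                                     # log marginal likelihood

batches = np.array_split(np.arange(n_obs), 10)   # data tempering in 10 stages
for idx in batches:
    # Incremental weights: likelihood of the new data batch at each particle.
    log_w = np.sum(-0.5 * (y[idx, None] - theta[None, :]) ** 2
                   - 0.5 * np.log(2 * np.pi), axis=0)
    m = log_w.max()
    w = np.exp(log_w - m)
    log_ml += m + np.log(w.mean())               # marginal-likelihood by-product
    # Multinomial resampling (the paper's Metropolis 'move' step is omitted).
    theta = theta[rng.choice(n_part, size=n_part, p=w / w.sum())]

post_mean = theta.mean()
```

The reweighting and resampling are embarrassingly parallel across particles, which is what makes the design attractive on massively parallel hardware.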
Igor Vaynman and Brendan K. Beare
Abstract
The variance targeting estimator (VTE) for generalized autoregressive conditionally heteroskedastic (GARCH) processes has been proposed as a computationally simpler and misspecification-robust alternative to the quasi-maximum likelihood estimator (QMLE). In this paper we investigate the asymptotic behavior of the VTE when the stationary distribution of the GARCH process has infinite fourth moment. Existing studies of historical asset returns indicate that this may be a case of empirical relevance. Under suitable technical conditions, we establish a stable limit theory for the VTE, with the rate of convergence determined by the tails of the stationary distribution. This rate is slower than that achieved by the QMLE. The limit distribution of the VTE is nondegenerate but singular. We investigate the use of subsampling techniques for inference, but find that finite sample performance is poor in empirically relevant scenarios.
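Variance targeting itself is simple to illustrate: the unconditional-variance relation σ² = ω/(1 − α − β) is inverted so that the sample variance pins down ω, leaving only (α, β) to be estimated by quasi-maximum likelihood. The sketch below uses a crude grid search in place of a numerical optimiser, and all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

# Simulate a GARCH(1,1): sigma2_t = omega + alpha*y_{t-1}^2 + beta*sigma2_{t-1}.
omega, alpha, beta, n = 0.1, 0.1, 0.8, 4000
y = np.empty(n)
s2 = omega / (1 - alpha - beta)          # start at the unconditional variance
for t in range(n):
    y[t] = np.sqrt(s2) * rng.normal()
    s2 = omega + alpha * y[t] ** 2 + beta * s2

def neg_quasi_loglik(a, b, y, s2_uncond):
    # Variance targeting: omega is replaced by s2_uncond*(1 - a - b), so the
    # sample variance pins down omega and only (a, b) remain to estimate.
    w = s2_uncond * (1 - a - b)
    s2, nll = s2_uncond, 0.0
    for yt in y:
        nll += np.log(s2) + yt**2 / s2
        s2 = w + a * yt**2 + b * s2
    return nll

s2_hat = y.var()                          # the targeted moment
grid_a = np.linspace(0.05, 0.2, 7)        # crude grid search; a real
grid_b = np.linspace(0.7, 0.9, 9)         # implementation would optimise
nll_best, a_vte, b_vte = min(
    (neg_quasi_loglik(a, b, y, s2_hat), a, b)
    for a in grid_a for b in grid_b if a + b < 0.999
)
```

Replacing ω by the targeted moment is what makes the estimator's rate of convergence depend on the tails of the stationary distribution, since the sample variance itself converges slowly when the fourth moment is infinite.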
Federico Echenique and Ivana Komunjer
Abstract
In this article we design an econometric test for monotone comparative statics (MCS) often found in models with multiple equilibria. Our test exploits the observable implications of the MCS prediction: that the extreme (high and low) conditional quantiles of the dependent variable increase monotonically with the explanatory variable. The main contribution of the article is to derive a likelihood-ratio test, which to the best of our knowledge is the first econometric test of MCS proposed in the literature. The test is an asymptotic “chi-bar squared” test for order restrictions on intermediate conditional quantiles. The key features of our approach are: (1) we do not need to estimate the underlying nonparametric model relating the dependent and explanatory variables to the latent disturbances; (2) we make few assumptions on the cardinality, location, or probabilities over equilibria. In particular, one can implement our test without assuming an equilibrium selection rule.
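The observable implication being tested, extreme conditional quantiles increasing in the explanatory variable, can be checked descriptively without any modelling. The sketch below bins x and verifies that the 5% and 95% conditional quantiles of y rise across bins on simulated data consistent with MCS; this is a descriptive check, not the article's chi-bar squared likelihood-ratio test, and the bin count and quantile levels are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(6)

# Simulated data with two 'equilibria': y = x + noise or y = 3x + noise,
# so both the low and the high conditional quantiles increase with x.
n = 20_000
x = rng.uniform(0, 1, size=n)
y = x + rng.normal(scale=0.2, size=n) + (rng.random(n) < 0.5) * 2 * x

bins = np.digitize(x, np.linspace(0, 1, 6)[1:-1])   # 5 bins on [0,1]
q_lo = np.array([np.quantile(y[bins == b], 0.05) for b in range(5)])
q_hi = np.array([np.quantile(y[bins == b], 0.95) for b in range(5)])

monotone = bool(np.all(np.diff(q_lo) > 0) and np.all(np.diff(q_hi) > 0))
```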
Abstract
In two seminal papers, Anderson and Hsiao (1981, 1982) thoroughly discuss the linear panel regression model without cross-sectional correlation. This uncorrelatedness assumption is now often examined in empirical work, using tests such as those by Pesaran, Ullah, and Yamagata (2008), Hsiao, Pesaran, and Pick (2012), or Pesaran (2015). All these tests in turn improve upon the so-called error-components test suggested in Breusch and Pagan (1980). In this chapter, the author revisits this error-components test and derives its asymptotic distribution under various scenarios: (a) both the time-series dimension T and the cross-sectional dimension N go to ∞ jointly (Phillips & Moon, 1999); (b) T → ∞ while N is fixed; and (c) N → ∞ while T is fixed. To the best of the author’s knowledge, the results under Scenarios (b) and (c) are new. Moreover, while the distributions under (a) and (b) are normal, that under (c) is not, and is even asymmetric. The critical values under (c) can be simulated. A Monte Carlo experiment is performed to shed light on the choice among the critical values suggested in the three scenarios for given T and N.
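The error-components statistic itself is easy to compute from a T × N residual matrix: T times the sum of squared pairwise cross-sectional correlations, which under the null is asymptotically chi-squared with N(N−1)/2 degrees of freedom as T → ∞ with N fixed. The sketch below checks the statistic's null mean by simulation; it does not reproduce the chapter's Scenario (c) analysis, and the sample sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(7)

def bp_lm(resid):
    """Breusch-Pagan (1980) LM statistic from a T x N residual matrix:
    T * sum of squared pairwise cross-sectional correlations."""
    T, N = resid.shape
    R = np.corrcoef(resid, rowvar=False)
    iu = np.triu_indices(N, k=1)
    return T * np.sum(R[iu] ** 2)

# Under the null of no cross-sectional correlation, the limiting chi-squared
# distribution has mean N(N-1)/2; check this by Monte Carlo.
T, N, reps = 200, 5, 500
df = N * (N - 1) // 2
stats = np.array([bp_lm(rng.normal(size=(T, N))) for _ in range(reps)])
```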
Ngai Hang Chan and Wilfredo Palma
Abstract
Since the seminal works of Granger and Joyeux (1980) and Hosking (1981), the estimation of long-memory time series models has received considerable attention and a number of parameter estimation procedures have been proposed. This paper gives an overview of this plethora of methodologies, with special focus on likelihood-based techniques. Broadly speaking, likelihood-based techniques can be classified into the following categories: exact maximum likelihood (ML) estimation (Sowell, 1992; Dahlhaus, 1989), ML estimates based on autoregressive approximations (Granger & Joyeux, 1980; Li & McLeod, 1986), Whittle estimates (Fox & Taqqu, 1986; Giraitis & Surgailis, 1990), Whittle estimates with autoregressive truncation (Beran, 1994a), approximate estimates based on the Durbin–Levinson algorithm (Haslett & Raftery, 1989), state-space-based maximum likelihood estimates for ARFIMA models (Chan & Palma, 1998), and estimation of stochastic volatility models (Ghysels, Harvey, & Renault, 1996; Breidt, Crato, & de Lima, 1998; Chan & Petris, 2000), among others. Given the diversified applications of these techniques in different areas, this review aims to provide a succinct survey of these methodologies as well as an overview of important related problems such as ML estimation with missing data (Palma & Chan, 1997), the influence of subsets of observations on estimates, and the estimation of seasonal long-memory models (Palma & Chan, 2005). The asymptotic properties, finite-sample performances, and inter-connections of these procedures are compared and examined. Finally, applications of these methodologies to financial time series are discussed.
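As one concrete example from this list, the Whittle estimator for an ARFIMA(0, d, 0) model minimises a frequency-domain approximation to the Gaussian likelihood built from the periodogram and the model spectral shape |2 sin(λ/2)|^{−2d}. The sketch below simulates the process by a truncated MA(∞) expansion and profiles out the innovation variance; the sample size, grid, and value of d are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(8)

# Simulate ARFIMA(0, d, 0) via truncated MA(inf): psi_k = psi_{k-1}*(k-1+d)/k.
d_true, n = 0.3, 4000
psi = np.empty(n)
psi[0] = 1.0
for k in range(1, n):
    psi[k] = psi[k - 1] * (k - 1 + d_true) / k
eps = rng.normal(size=2 * n)
x = np.convolve(eps, psi)[n : 2 * n]      # keep a fully-overlapped stretch
x = x - x.mean()

# Periodogram at the Fourier frequencies lambda_j = 2*pi*j/n, j = 1..n/2-1.
j = np.arange(1, n // 2)
lam = 2 * np.pi * j / n
I = np.abs(np.fft.fft(x)[1 : n // 2]) ** 2 / (2 * np.pi * n)

def whittle_obj(d):
    # Profiled Whittle objective: innovation variance is concentrated out.
    g = np.abs(2 * np.sin(lam / 2)) ** (-2 * d)   # ARFIMA(0,d,0) spectral shape
    return np.log(np.mean(I / g)) + np.mean(np.log(g))

grid = np.linspace(0.0, 0.45, 91)
d_hat = grid[np.argmin([whittle_obj(d) for d in grid])]
```

Avoiding the O(n³) cost of exact Gaussian likelihood evaluation is the main practical appeal of the Whittle approach over exact ML.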