Search results
Lee C. Adkins, Randall C. Campbell, Viera Chmelarova and R. Carter Hill
Abstract
The Hausman test is used in applied economic work as a test of misspecification. It is most commonly thought of as a test of whether one or more explanatory variables in a regression model are endogenous. The usual Hausman contrast test requires one estimator to be efficient under the null hypothesis. If the data are heteroskedastic, the least squares estimator is no longer efficient. One option in that case is to estimate the covariance matrix of the difference of the contrasted estimators, as suggested by Hahn, Ham, and Moon (2011). Other options for carrying out a Hausman-like test include estimating an artificial regression and using robust standard errors. Alternatively, we might seek additional power by estimating the artificial regression using feasible generalized least squares. Finally, we might stack the moment conditions leading to the two estimators and estimate the resulting system by GMM. We examine these options in a Monte Carlo experiment. We conclude that the test based on the procedure of Hahn, Ham, and Moon has good properties. The generalized least squares-based tests have higher size-corrected power when heteroskedasticity is detected in the Durbin-Wu-Hausman (DWH) regression and the heteroskedasticity is associated with a strong external IV. We do not consider the properties of the implied pretest estimator.
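The artificial-regression (control function) form of the DWH test mentioned in the abstract can be illustrated with a short numpy sketch. All data, coefficients, and variable names below are hypothetical, and HC0 robust standard errors stand in for the robust-variance options the paper compares:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
z = rng.normal(size=n)                   # external instrument
v = rng.normal(size=n)
u = 0.8 * v + rng.normal(size=n)         # u correlated with v, so x is endogenous
x = z + v                                # first-stage relation
y = 2.0 + 0.5 * x + u

def ols(X, y):
    b = np.linalg.solve(X.T @ X, X.T @ y)
    return b, y - X @ b

# Stage 1: regress the suspect regressor on the instrument set, keep residuals
Z = np.column_stack([np.ones(n), z])
_, vhat = ols(Z, x)

# Stage 2 (artificial regression): a significant coefficient on vhat
# signals endogeneity of x
X = np.column_stack([np.ones(n), x, vhat])
b, e = ols(X, y)

# HC0 (White) heteroskedasticity-robust covariance for the t-test
XtX_inv = np.linalg.inv(X.T @ X)
V = XtX_inv @ (X.T @ (X * (e ** 2)[:, None])) @ XtX_inv
t_vhat = b[2] / np.sqrt(V[2, 2])         # large |t| rejects exogeneity of x
```

The coefficient on x in the artificial regression equals the IV estimate, so the sketch also shows why the test doubles as a pretest for choosing between OLS and IV.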
Tran Liem, Marc Gaudry, Marcel Dagenais and Ulrich Blum
Abstract
Purpose
In the finance literature, fitting a cross-sectional regression with (estimated) abnormal returns as the dependent variable and firm-specific variables (e.g. financial ratios) as independent variables has become de rigueur for a publishable event study. In the absence of skewness and/or kurtosis in the explanatory variable, the regression design does not exhibit leverage – an issue that has been addressed in the econometrics literature on the finite-sample properties of heteroskedasticity-consistent (HC) standard errors, but not in the finance literature on event studies. The paper aims to discuss this issue.
Design/methodology/approach
In this paper, simulations are designed to evaluate the potential bias in the standard error of the regression coefficient when the regression design includes “points of high leverage” (Chesher and Jewitt, 1987) and heteroskedasticity. The empirical distributions of test statistics are tabulated from ordinary least squares, weighted least squares, and HC standard errors.
Findings
None of the test statistics examined in these simulations are uniformly robust with regard to conditional heteroskedasticity when the regression includes “points of high leverage.” In some cases the bias can be quite large: an empirical rejection rate as high as 25 percent for a 5 percent nominal significance level. Further, the bias in OLS HC standard errors may be attenuated but not fully corrected with a “wild bootstrap.”
Research limitations/implications
If the researcher suspects an event-induced increase in return variances, tests for conditional heteroskedasticity should be conducted and the regressor matrix should be evaluated for observations that exhibit a high degree of leverage.
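The diagnostic advice above (check for conditional heteroskedasticity and inspect the regressor matrix for leverage) can be sketched in numpy. The heavy-tailed regressor, the simulated heteroskedasticity, and the 2k/n rule of thumb are illustrative assumptions, not the paper's design:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
x = rng.standard_t(df=3, size=n)         # heavy tails produce leverage points
X = np.column_stack([np.ones(n), x])
y = 1.0 + 0.5 * x + np.abs(x) * rng.normal(size=n)   # heteroskedastic errors

XtX_inv = np.linalg.inv(X.T @ X)
b = XtX_inv @ X.T @ y
e = y - X @ b

# Hat-matrix diagonal: the leverage of each observation
h = np.einsum('ij,jk,ik->i', X, XtX_inv, X)
high_leverage = h > 2 * X.shape[1] / n   # common 2k/n rule of thumb

# HC3 covariance: squared residuals inflated by (1 - h_i)^2,
# which discounts exactly the high-leverage points
w = (e / (1.0 - h)) ** 2
V_hc3 = XtX_inv @ (X.T @ (X * w[:, None])) @ XtX_inv
se_hc3 = np.sqrt(np.diag(V_hc3))
```

HC3 is only one of the finite-sample corrections discussed in this literature; the simulations in the paper compare several such estimators.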
Originality/value
This paper is a modest step toward filling a gap on the finite sample properties of HC standard errors in the event methodology literature.
Gregory Koutmos and Panayiotis Theodossiou
Abstract
Several authors have raised the issue of non‐stationarity of security returns in empirical tests of the Arbitrage Pricing Theory (APT). This paper tests for one form of non‐stationarity, namely, conditional heteroskedasticity, in the empirical APT with observed factors. Using monthly stock returns for the period 1970 to 1988, this paper shows that conditional heteroskedasticity is a pervasive phenomenon leading to inefficient estimates of factor betas. Ignoring the problem may produce erroneous conclusions as to which risk factors require a premium. Furthermore, grouping individual securities into portfolios does not appear to diminish the presence of conditional heteroskedasticity.
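A standard way to detect the conditional heteroskedasticity this paper documents is Engle's ARCH-LM test (a textbook diagnostic, not the authors' own procedure). A minimal numpy sketch on simulated ARCH(1) data, with all parameter values hypothetical:

```python
import numpy as np

def arch_lm(resid, q=4):
    """Engle's ARCH-LM statistic: regress e_t^2 on q of its own lags; T*R^2 ~ chi2(q)."""
    e2 = resid ** 2
    y = e2[q:]
    X = np.column_stack([np.ones(len(y))]
                        + [e2[q - j: len(e2) - j] for j in range(1, q + 1)])
    b = np.linalg.lstsq(X, y, rcond=None)[0]
    r2 = 1.0 - np.var(y - X @ b) / np.var(y)
    return len(y) * r2

# Simulated ARCH(1) returns: today's variance depends on yesterday's shock
rng = np.random.default_rng(5)
e = np.zeros(2000)
for t in range(1, 2000):
    e[t] = np.sqrt(0.1 + 0.5 * e[t - 1] ** 2) * rng.normal()

stat = arch_lm(e)      # compare to the chi2(4) 5% critical value, about 9.49
```

Applied to factor-model residuals, a large statistic would motivate the GARCH-type corrections the APT literature has since adopted.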
Shahan Akhtar and Naimat U. Khan
Abstract
Purpose
The current paper aims to fill a gap in the literature by analyzing the nature of volatility on the Karachi Stock Exchange (KSE) 100 index and developing an understanding as to which of the models used is most suitable for measuring volatility. The study contributes significantly to the literature as, compared with the limited previous studies of Pakistan, it covers three types of data (i.e. daily, weekly and monthly) for the whole period from the introduction of the KSE 100 index on November 2, 1991 to December 31, 2013. In addition, to analyze the impact of the global financial crisis upon volatility, the data have been divided into pre-crisis (1991-2007) and post-crisis (2008-2013) periods.
Design/methodology/approach
This study has used an advanced set of volatility models such as autoregressive conditional heteroskedasticity [ARCH (1)], generalized autoregressive conditional heteroskedasticity [GARCH (1, 1)], GARCH in mean [GARCH-M (1, 1)], exponential GARCH [E-GARCH (1, 1)], threshold GARCH [T-GARCH (1, 1)], power GARCH [P-GARCH (1, 1)] and also a simple exponentially weighted moving average (EWMA) model.
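Of the models listed, the EWMA is the simplest to sketch. Below is a minimal RiskMetrics-style implementation in numpy; the smoothing parameter 0.94 and the simulated returns are illustrative assumptions, not values taken from the paper:

```python
import numpy as np

def ewma_vol(returns, lam=0.94):
    """EWMA variance recursion: s2_t = lam * s2_{t-1} + (1 - lam) * r_{t-1}^2."""
    s2 = np.empty(len(returns))
    s2[0] = returns.var()                # initialise at the sample variance
    for t in range(1, len(returns)):
        s2[t] = lam * s2[t - 1] + (1 - lam) * returns[t - 1] ** 2
    return np.sqrt(s2)

rng = np.random.default_rng(2)
r = rng.normal(scale=0.01, size=1000)    # stand-in for daily index returns
sigma = ewma_vol(r)                      # conditional volatility path
```

Unlike the GARCH variants, the EWMA imposes its parameters rather than estimating them, which is why it remains applicable even where ARCH effects are absent, as the paper finds for the monthly series.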
Findings
The results reveal that the daily, weekly and monthly return series show non-normal distribution, stationarity and volatility clustering. However, heteroskedasticity is absent only in the monthly returns, making the EWMA model the only one usable to measure the volatility level in the monthly series. The P-GARCH (1, 1) model proved to be a better model for modeling volatility in the case of daily returns, while the GARCH (1, 1) model proved to be the most appropriate for weekly data based on the Schwarz information criterion (SIC) and log likelihood (LL) functionality. The study shows high persistence of volatility, a mean-reverting process and an absence of a risk premium in the KSE market, with an insignificant leverage effect only in the case of weekly returns. However, a significant leverage effect is reported for the daily series of the KSE 100 index. In addition, regarding the impact of the global financial crisis upon volatility, the findings show that the subperiods demonstrated slightly lower volatility and the crisis did not cause a rise in volatility levels.
Originality/value
Previously, the literature about volatility modeling in Pakistan's markets has been limited to a few models and relatively small sample sizes. The current study has attempted to overcome these limitations and used diverse models for three types of data series (daily, weekly and monthly). In addition, the Pakistani economy has been beset by turmoil throughout its history, experiencing a range of shocks from the mild to the extreme. This paper has measured the impact of those shocks upon the volatility levels of the KSE.
Ziwen Gao, Steven F. Lehrer, Tian Xie and Xinyu Zhang
Abstract
Motivated by empirical features that characterize cryptocurrency volatility data, the authors develop a forecasting strategy that can account for both model uncertainty and heteroskedasticity of unknown form. The theoretical investigation establishes the asymptotic optimality of the proposed heteroskedastic model averaging heterogeneous autoregressive (H-MAHAR) estimator under mild conditions. The authors additionally examine the convergence rate of the estimated weights of the proposed H-MAHAR estimator. This analysis sheds new light on the asymptotic properties of the least squares model averaging estimator under alternative complicated data generating processes (DGPs). To examine the performance of the H-MAHAR estimator, the authors conduct an out-of-sample forecasting application involving 22 different cryptocurrency assets. The results emphasize the importance of accounting for both model uncertainty and heteroskedasticity in practice.
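The heterogeneous autoregressive (HAR) component underlying the H-MAHAR estimator regresses realized volatility on its daily, weekly (5-day) and monthly (22-day) lagged averages. A minimal OLS sketch on simulated persistent volatility; the model-averaging and heteroskedasticity-robust weighting steps of H-MAHAR are not reproduced here, and all simulated values are hypothetical:

```python
import numpy as np

def har_design(rv):
    """HAR regressors: yesterday's RV plus its 5-day and 22-day lagged averages."""
    rows = [[1.0, rv[t - 1], rv[t - 5:t].mean(), rv[t - 22:t].mean()]
            for t in range(22, len(rv))]
    return np.asarray(rows), rv[22:]

# Persistent log-volatility stands in for realized cryptocurrency volatility
rng = np.random.default_rng(3)
logv = np.zeros(500)
for t in range(1, 500):
    logv[t] = 0.9 * logv[t - 1] + rng.normal(scale=0.3)
rv = np.exp(logv)

X, y = har_design(rv)
beta = np.linalg.lstsq(X, y, rcond=None)[0]   # plain OLS fit of one candidate model
```

H-MAHAR would fit several such candidate specifications and combine their forecasts with data-driven weights rather than committing to a single beta.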
Abstract
Identification of shocks of interest is a central problem in structural vector autoregressive (SVAR) modeling. Identification is often achieved by imposing restrictions on the impact or long-run effects of shocks or by considering sign restrictions for the impulse responses. In a number of articles changes in the volatility of the shocks have also been used for identification. The present study focuses on the latter device. Some possible setups for identification via heteroskedasticity are reviewed and their potential and limitations are discussed. Two detailed examples are considered to illustrate the approach.
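The basic two-regime version of identification via heteroskedasticity can be sketched with population covariance matrices: if Sigma1 = B B' and Sigma2 = B Lam B' with a distinct diagonal Lam, the generalized eigenproblem Sigma2 v = lam Sigma1 v recovers B up to column sign and ordering. A minimal scipy illustration, with the matrices below purely hypothetical (in practice the covariances would be estimated from the two volatility regimes):

```python
import numpy as np
from scipy.linalg import eigh

# One impact matrix B shared by two volatility regimes
B = np.array([[1.0, 0.5],
              [0.3, 1.2]])
Lam = np.diag([4.0, 0.25])       # relative shock variances in regime 2
Sigma1 = B @ B.T
Sigma2 = B @ Lam @ B.T

# Generalized eigenproblem: eigenvalues are the diagonal of Lam,
# eigenvectors (normalised so V' Sigma1 V = I) give B = Sigma1 @ V
lam, V = eigh(Sigma2, Sigma1)
B_hat = Sigma1 @ V               # columns of B up to sign and ordering
```

The distinct-eigenvalue requirement is exactly the condition discussed in this literature: if two shocks change variance proportionally, their columns of B are not separately identified.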
Abstract
Purpose
Turvey (2007, Physica A) introduced a scaled variance ratio procedure for testing the random walk hypothesis (RWH) for financial time series by estimating Hurst coefficients for a fractional Brownian motion model of asset prices. The purpose of this paper is to extend his work by making the estimation procedure robust to heteroskedasticity and by addressing the multiple hypothesis testing problem.
Design/methodology/approach
Unbiased, heteroskedasticity consistent, variance ratio estimates are calculated for end of day price data for eight time lags over 12 agricultural commodity futures (front month) and 40 US equities from 2000-2014. A bootstrapped stepdown procedure is used to obtain appropriate statistical confidence for the multiplicity of hypothesis tests. The variance ratio approach is compared against regression-based testing for fractionality.
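The plain variance ratio statistic at lag q can be sketched as follows; this omits the paper's bias correction, heteroskedasticity-consistent standard errors, and bootstrapped stepdown procedure. The simulated prices follow a random walk, so VR should be near 1:

```python
import numpy as np

def variance_ratio(p, q):
    """VR(q) = Var(q-period return) / (q * Var(1-period return)); ~1 under a random walk."""
    r = np.diff(p)                        # one-period log returns
    rq = p[q:] - p[:-q]                   # overlapping q-period returns
    mu = r.mean()
    var1 = np.mean((r - mu) ** 2)
    varq = np.mean((rq - q * mu) ** 2) / q
    return varq / var1

rng = np.random.default_rng(6)
p = np.cumsum(rng.normal(size=5000))      # simulated log-price random walk
vr2 = variance_ratio(p, 2)
```

VR(q) above 1 suggests positive autocorrelation (trending), below 1 suggests mean reversion; the Hurst-coefficient interpretation in the paper maps these deviations onto fractional Brownian motion.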
Findings
Failing to account for bias, heteroskedasticity, and multiplicity of testing can lead to large numbers of erroneous rejections of the null hypothesis of efficient markets following an independent random walk. Even with these adjustments, a few futures contracts significantly violate independence for short lags at the 99 percent level, and a number of equities/lags violate independence at the 95 percent level. When testing at the asset level, futures prices are found not to contain fractional properties, while some equities do.
Research limitations/implications
Only a subsample of futures and equities, and only a limited number of lags, are evaluated. It is possible that multiplicity adjustments for larger numbers of tests would result in fewer rejections of independence.
Originality/value
This paper provides empirical evidence that violations of the RWH for financial time series are likely to exist, but are perhaps less common than previously thought.
John C. Chao, Jerry A. Hausman, Whitney K. Newey, Norman R. Swanson and Tiemen Woutersen
Abstract
This chapter shows how a weighted average of a forward and reverse Jackknife IV estimator (JIVE) yields estimators that are robust against heteroscedasticity and many instruments. These estimators, called HFUL (Heteroscedasticity robust Fuller) and HLIM (Heteroskedasticity robust limited information maximum likelihood (LIML)) were introduced by Hausman, Newey, Woutersen, Chao, and Swanson (2012), but without derivation. Combining consistent estimators is a theme that is associated with Jerry Hausman and, therefore, we present this derivation in this volume. Additionally, and in order to further understand and interpret HFUL and HLIM in the context of jackknife type variance ratio estimators, we show that a new variant of HLIM, under specific grouped data settings with dummy instruments, simplifies to the Bekker and van der Ploeg (2005) MM (method of moments) estimator.
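The jackknife idea behind these estimators can be illustrated with the basic JIVE1 estimator (a simpler ancestor of HFUL and HLIM, not the chapter's own derivation): each observation's first-stage fitted value is replaced by a leave-one-out version, removing the own-observation term that biases 2SLS when instruments are many. A minimal numpy sketch with hypothetical coefficients and dimensions:

```python
import numpy as np

rng = np.random.default_rng(7)
n, K = 2000, 20                               # many instruments
Z = rng.normal(size=(n, K))
v = rng.normal(size=n)
x = Z @ np.full(K, 0.15) + v                  # first stage
y = 0.5 * x + 0.8 * v + rng.normal(size=n)    # x is endogenous via v

P = Z @ np.linalg.solve(Z.T @ Z, Z.T)         # instrument projection matrix
h = np.diag(P)

# Leave-one-out fitted values: subtract each observation's own contribution,
# the source of 2SLS bias with many instruments
x_loo = (P @ x - h * x) / (1.0 - h)
beta_jive = (x_loo @ y) / (x_loo @ x)         # should be close to the true 0.5
```

HFUL and HLIM refine this construction to restore efficiency and improve behavior under weak instruments while keeping the heteroskedasticity robustness.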