Search results
1 – 10 of 59

Roman Liesenfeld, Jean-François Richard and Jan Vogler
Abstract
We propose a generic algorithm for numerically accurate likelihood evaluation of a broad class of spatial models characterized by a high-dimensional latent Gaussian process and non-Gaussian response variables. The class of models under consideration includes specifications for discrete choices, event counts and limited-dependent variables (truncation, censoring, and sample selection) among others. Our algorithm relies upon a novel implementation of efficient importance sampling (EIS) specifically designed to exploit typical sparsity of high-dimensional spatial precision (or covariance) matrices. It is numerically very accurate and computationally feasible even for very high-dimensional latent processes. Thus, maximum likelihood (ML) estimation of high-dimensional non-Gaussian spatial models, hitherto considered to be computationally prohibitive, becomes feasible. We illustrate our approach with ML estimation of a spatial probit for US presidential voting decisions and spatial count data models (Poisson and Negbin) for firm location choices.
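To make the computational problem concrete: for a spatial probit, the likelihood is a high-dimensional Gaussian orthant probability, and a plain Monte Carlo estimate of it is exactly the noisy baseline that EIS is designed to improve upon. A minimal sketch (the tridiagonal precision matrix and the response vector are hypothetical, and crude prior sampling stands in for the paper's optimized EIS proposal):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy spatial probit: latent z ~ N(0, Q^{-1}) with a sparse (tridiagonal)
# precision matrix Q, and binary responses y_i = 1{z_i > 0}.
d = 5
Q = 2.0 * np.eye(d)
Q[np.arange(d - 1), np.arange(1, d)] = -0.5
Q[np.arange(1, d), np.arange(d - 1)] = -0.5
Sigma = np.linalg.inv(Q)
C = np.linalg.cholesky(Sigma)

y = np.array([1, 0, 1, 1, 0])  # hypothetical observed choices

# The likelihood P(y) is the orthant probability that every z_i has the
# observed sign.  Crude Monte Carlo with the prior as proposal:
S = 100_000
z = rng.standard_normal((S, d)) @ C.T
match = ((z > 0) == (y == 1)).all(axis=1)
lik_hat = match.mean()
print(lik_hat)
```

EIS replaces the prior draws above with an importance density fitted to the integrand; with a latent dimension in the thousands rather than five, the crude estimator would essentially never hit the orthant, which is why the optimized proposal is essential.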
Kousik Guhathakurta, Basabi Bhattacharya and A. Roy Chowdhury
Abstract
It has long been argued that distributions of empirical returns do not follow the log-normal distribution on which many celebrated results of finance, including the Black–Scholes option-pricing model, are based. Borland (2002) succeeds in obtaining alternative closed-form solutions for European options based on the Tsallis distribution, which allows for statistical feedback as a model of the underlying stock returns. Motivated by this, we simulate two distinct time series based on initial data from NIFTY daily close values, one based on the Gaussian return distribution and the other on a non-Gaussian distribution. Using techniques of non-linear dynamics, we examine the underlying dynamic characteristics of both simulated time series and compare them with those of the actual data. Our findings give a definite edge to the non-Gaussian model over the Gaussian one.
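As a rough illustration of the comparison (not the paper's exact construction), one can simulate a Gaussian return series and a heavy-tailed one side by side, using a Student-t as a stand-in for the Tsallis law (for 1 < q < 3 a q-Gaussian is a rescaled Student-t), and check a simple diagnostic such as excess kurtosis:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 50_000

# Gaussian returns vs heavy-tailed returns (Student-t with 5 degrees of
# freedom as a stand-in for a Tsallis/q-Gaussian law); scales are arbitrary.
gauss = 0.01 * rng.standard_normal(n)
heavy = 0.01 * stats.t.rvs(df=5, size=n, random_state=rng)

# Excess kurtosis separates the two models: near 0 for the Gaussian series,
# clearly positive for the heavy-tailed one.
k_gauss = stats.kurtosis(gauss)
k_heavy = stats.kurtosis(heavy)
print(k_gauss, k_heavy)
```

The paper goes further, applying non-linear-dynamics diagnostics rather than simple moments, but the kurtosis gap already shows why the two simulated series behave differently.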
Naceur Naguez and Jean-Luc Prigent
Abstract
Purpose – The purpose of this chapter is to estimate non-Gaussian distributions by means of Johnson distributions. An empirical illustration on hedge fund returns is detailed.
Methodology/approach – To fit non-Gaussian distributions, the chapter introduces the family of Johnson distributions and its general extensions. We use both parametric and non-parametric approaches. In a first step, we analyze the serial correlation of our sample of hedge fund returns and unsmooth the series to correct the correlations. Then, we estimate the distribution by the standard Johnson system of laws. Finally, we search for a more general distribution of Johnson type, using a non-parametric approach.
Findings – We use data from the indexes Credit Suisse/Tremont Hedge Fund (CSFB/Tremont) provided by Credit Suisse. For the parametric approach, we find that the SU Johnson distribution is the most appropriate, except for the Managed Futures. For the non-parametric approach, we determine the best polynomial approximation of the function characterizing the transformation from the initial Gaussian law to the generalized Johnson distribution.
Originality/value of chapter – These findings are novel since we use an extension of the Johnson distributions to better fit non-Gaussian distributions, in particular in the case of hedge fund returns. We illustrate the power of this methodology that can be further developed in the multidimensional case.
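The parametric step of the methodology can be sketched with SciPy's four-parameter Johnson SU family; the sample below is synthetic and stands in for an unsmoothed hedge-fund return series (the chapter's actual data are the CSFB/Tremont indexes):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Synthetic skewed, fat-tailed "returns" drawn from a Johnson SU law with
# hypothetical parameters (a: skewness shape, b: tail shape).
sample = stats.johnsonsu.rvs(a=1.0, b=1.5, loc=0.01, scale=0.02,
                             size=5_000, random_state=rng)

# Maximum-likelihood fit of the Johnson SU system, as in the parametric step.
a_hat, b_hat, loc_hat, scale_hat = stats.johnsonsu.fit(sample)
print(a_hat, b_hat, loc_hat, scale_hat)
```

The chapter's non-parametric extension replaces this fixed transformation from the Gaussian with a polynomial approximation estimated from the data.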
Abstract
Purpose – Time-series regression models are applied to analyse transport safety data for three purposes: (1) to develop a relationship between transport accidents (or incidents) and various time-varying factors, with the aim of identifying the most important factors; (2) to develop a time-series accident model in forecasting future accidents for the given values of future time-varying factors and (3) to evaluate the impact of a system-wide policy, education or engineering intervention on accident counts. Regression models for analysing transport safety data are well established, especially in analysing cross-sectional and panel datasets. There is, however, a dearth of research relating to time-series regression models in the transport safety literature. The purpose of this chapter is to examine existing literature with the aim of identifying time-series regression models that have been employed in safety analysis in relation to wider applications. The aim is to identify time-series regression models that are applicable in analysing disaggregated accident counts.
Methodology/Approach – There are two main issues in modelling time-series accident counts: (1) a flexible approach in addressing serial autocorrelation inherent in time-series processes of accident counts and (2) the fact that the conditional distribution (conditioned on past observations and covariates) of accident counts follows a Poisson-type distribution. Various time-series regression models are explored to identify the models most suitable for analysing disaggregated time-series accident datasets. A recently developed time-series regression model – the generalised linear autoregressive and moving average (GLARMA) model – has been identified as the best model to analyse safety data.
Findings – The GLARMA model was applied to a time-series dataset of airproxes (aircraft proximity) that indicate airspace safety in the United Kingdom. The aim was to evaluate the impact of an airspace intervention (i.e., the introduction of reduced vertical separation minima, RVSM) on airspace safety while controlling for other factors, such as air transport movements (ATMs) and seasonality. The results indicate that the GLARMA model is more appropriate than a generalised linear model (e.g., Poisson or Poisson-Gamma), and it has been found that the introduction of RVSM has reduced the airprox events by 15%. In addition, it was found that a 1% increase in ATMs within UK airspace would lead to a 1.83% increase in monthly airproxes in UK airspace.
Practical applications – The methodology developed in this chapter is applicable to many time-series processes of accident counts. The models recommended in this chapter could be used to identify different time-varying factors and to evaluate the effectiveness of various policy and engineering interventions on transport safety or similar data (e.g., crimes).
Originality/value of paper – The GLARMA model has not been properly explored in modelling time-series safety data. This new class of model has been applied to a dataset in evaluating the effectiveness of an intervention. The model recommended in this chapter would greatly benefit researchers and analysts working with time-series data.
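A stripped-down version of the GLARMA mechanism can be sketched as an observation-driven Poisson model in which serial dependence enters through lagged Pearson residuals. The parameters below are hypothetical and the model is a GLARMA(1,0)-style simplification, not the fitted airprox specification:

```python
import numpy as np

rng = np.random.default_rng(3)

def glarma_sim(beta0, phi, n, rng):
    """Simulate a GLARMA(1,0)-style Poisson count series:
    log(mu_t) = beta0 + phi * e_{t-1}, with e_t = (y_t - mu_t) / sqrt(mu_t)."""
    y = np.empty(n, dtype=np.int64)
    e_prev = 0.0
    for t in range(n):
        mu = np.exp(beta0 + phi * e_prev)
        y[t] = rng.poisson(mu)
        e_prev = (y[t] - mu) / np.sqrt(mu)  # residual feedback = autocorrelation
    return y

# Hypothetical monthly counts (think airprox-like events) with mild
# positive feedback.
y = glarma_sim(beta0=1.5, phi=0.3, n=500, rng=rng)
print(y.mean())
```

In the chapter's application the linear predictor would also include covariates such as log air transport movements, seasonal terms and the RVSM intervention dummy.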
Abstract
I review the burgeoning literature on applications of Markov regime switching models in empirical finance. In particular, distinct attention is devoted to the ability of Markov switching models to fit the data, to filter unknown regimes and states from the data, and to provide a powerful tool for testing hypotheses formulated in light of financial theories, as well as to their forecasting performance with reference to both point and density predictions. The review covers papers concerning a multiplicity of sub-fields in financial economics, ranging from empirical analyses of stock returns, the term structure of default-free interest rates, and the dynamics of exchange rates, to the joint process of stock and bond returns.
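The basic mechanism these models capture can be sketched in a few lines: a two-regime Markov chain switching between a calm and a volatile state (all parameters below are hypothetical) generates fat tails and volatility clustering even though each regime is conditionally Gaussian:

```python
import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(4)

# Two-regime Markov switching model for returns, with persistent regimes.
P = np.array([[0.98, 0.02],   # P[i, j] = Pr(s_t = j | s_{t-1} = i)
              [0.05, 0.95]])
mu = np.array([0.05, -0.10])  # regime-specific means
sigma = np.array([0.5, 2.0])  # regime-specific volatilities

n = 10_000
s = np.empty(n, dtype=int)
s[0] = 0
for t in range(1, n):
    s[t] = rng.choice(2, p=P[s[t - 1]])
r = mu[s] + sigma[s] * rng.standard_normal(n)

# Switching between conditionally Gaussian regimes produces unconditional
# excess kurtosis, a stylized fact of financial returns.
k = kurtosis(r)
print(k)
```

Filtering the unobserved state sequence `s` back out of `r`, given only the returns, is exactly the inference problem the reviewed literature addresses.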
Anders Forslund and Ann-Sofie Kolm
Abstract
A number of earlier studies have examined whether extensive active labour market programmes (ALMPs) contribute to upward wage pressure in the Swedish economy. Most studies on aggregate data have concluded that they actually do. In this paper we look at this issue using more recent data, to check whether the extreme conditions in the Swedish labour market in the 1990s and the concomitant high levels of ALMP participation have brought about a change in the previously observed patterns. We also examine the issue using three different estimation methods to check the robustness of the results. Our main finding is that, according to most estimates, ALMPs do not seem to contribute significantly to increased wage pressure.
Chi Wan and Zhijie Xiao
Abstract
This paper analyzes the roles of idiosyncratic risk and firm-level conditional skewness in determining cross-sectional returns. It is shown that the traditional EGARCH estimates of conditional idiosyncratic volatility may carry significant finite-sample estimation bias in the presence of non-Gaussianity. We propose a new estimator that has more robust sampling performance than the EGARCH MLE in the presence of heavy-tailed or skewed innovations. Our cross-sectional portfolio analysis demonstrates that the idiosyncratic volatility puzzle documented by Ang, Hodrick, Xing, and Zhang (2006) exists intertemporally. We conduct further analysis to solve the puzzle. We show that two factors, idiosyncratic variance and individual conditional skewness, play important roles in determining cross-sectional returns. A new concept, the “expected windfall,” is introduced as an alternative measure of conditional return skewness. After controlling for these two additional factors, we solve the major piece of this puzzle: our cross-sectional regression tests identify a positive relationship between conditional idiosyncratic volatility and expected returns for over 99% of the total market capitalization of the NYSE, NASDAQ, and AMEX stock exchanges.
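The finite-sample point can be illustrated generically (this is not the authors' estimator, just a standard contrast): with heavy-tailed innovations, a moment-based scale estimate is pulled up by extreme draws, while a quantile-based one is not, which is the kind of robustness sought for conditional idiosyncratic volatility:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# Standardized heavy-tailed innovations: Student-t with 3 degrees of
# freedom, rescaled to unit variance (a stand-in for non-Gaussian shocks).
df = 3
x = stats.t.rvs(df=df, size=2_000, random_state=rng)
x = x / np.sqrt(df / (df - 2))

# Moment-based scale (sample std) vs a quantile-based, normal-consistent
# MAD scale: the latter is insensitive to the extreme draws in the tails.
sd = x.std(ddof=1)
mad_scale = stats.median_abs_deviation(x, scale='normal')
print(sd, mad_scale)
```

The paper works with full EGARCH dynamics rather than unconditional scales, but the same tail sensitivity is what biases the Gaussian MLE there.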
Abstract
For over three decades, vector autoregressions have played a central role in empirical macroeconomics. These models are general, can capture sophisticated dynamic behavior, and can be extended to include features such as structural instability, time-varying parameters, dynamic factors, threshold-crossing behavior, and discrete outcomes. Building upon growing evidence that the assumption of linearity may be undesirable in modeling certain macroeconomic relationships, this article seeks to add to recent advances in VAR modeling by proposing a nonparametric dynamic model for multivariate time series. In this model, the problems of modeling and estimation are approached from a hierarchical Bayesian perspective. The article considers the issues of identification, estimation, and model comparison, enabling nonparametric VAR (or NPVAR) models to be fit efficiently by Markov chain Monte Carlo (MCMC) algorithms and compared to parametric and semiparametric alternatives by marginal likelihoods and Bayes factors. Among other benefits, the methodology allows for a more careful study of structural instability while guarding against the possibility of unaccounted nonlinearity in otherwise stable economic relationships. Extensions of the proposed nonparametric model to settings with heteroskedasticity and other important modeling features are also considered. The techniques are employed to study the postwar U.S. economy, confirming the presence of distinct volatility regimes and supporting the contention that certain nonlinear relationships in the data can remain undetected by standard models.
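The parametric baseline that the NPVAR generalizes is the linear Gaussian VAR. A few lines suffice to simulate a bivariate VAR(1) with hypothetical coefficients and recover them by least squares; the nonparametric model replaces this fixed linear map with a flexible one estimated hierarchically:

```python
import numpy as np

rng = np.random.default_rng(6)

# Simulate a bivariate VAR(1): y_t = A @ y_{t-1} + eps_t.
A = np.array([[0.5, 0.1],
              [0.0, 0.3]])
n = 5_000
y = np.zeros((n, 2))
for t in range(1, n):
    y[t] = A @ y[t - 1] + 0.1 * rng.standard_normal(2)

# OLS recovery: stack y_t on lagged y_{t-1}; lstsq solves X @ B ~ Y with
# B = A^T, so transpose to get the coefficient matrix back.
X, Y = y[:-1], y[1:]
A_hat = np.linalg.lstsq(X, Y, rcond=None)[0].T
print(np.round(A_hat, 2))
```

Detecting when this linear map is inadequate (threshold effects, regime-dependent volatility) is precisely what the article's marginal-likelihood comparison between parametric and nonparametric fits is designed to do.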