Firano Zakaria and Anass Benbachir
Abstract
Purpose
One of the crucial issues in contemporary finance is predicting the volatility of financial assets. In this paper, the authors model the stochastic volatility of the MAD/EURO and MAD/USD exchange rates.
Design/methodology/approach
For this purpose, the authors adopted a Bayesian approach based on the Markov chain Monte Carlo (MCMC) algorithm, which makes it possible to reproduce the main stylized empirical facts of the assets studied. The data used in this study are the daily historical series of MAD/EURO and MAD/USD exchange rates covering the period from February 2, 2000, to March 3, 2017, representing 4,456 observations.
Findings
With this approach, the authors were able to estimate all the random parameters of the stochastic volatility model, which permits prediction of future exchange rates. The authors also simulated the histograms, posterior densities and cumulative averages of the model parameters. The predictive efficiency of the stochastic volatility model for Morocco can facilitate management of the exchange rate under a more flexible exchange regime and ensure better targeting of monetary and exchange policies.
Originality/value
To the best of the authors’ knowledge, the novelty of the paper lies in producing a tool for predicting the evolution of the Moroccan exchange rate, and in designing a tool for the monetary authorities, who today take a proactive approach to managing the exchange rate. Cyclical policies such as monetary policy and exchange rate policy can incorporate this type of modelling into the decision-making process to achieve better stabilization of the macroeconomic and financial framework.
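The stochastic volatility specification underlying models of this kind treats log-volatility as a latent AR(1) process. A minimal simulation sketch (the parameter values and function name are illustrative, not the paper's estimates):

```python
import numpy as np

def simulate_sv(n, mu=-1.0, phi=0.97, sigma_eta=0.15, seed=0):
    """Simulate returns from a basic stochastic volatility model:
    h_t = mu + phi*(h_{t-1} - mu) + sigma_eta*eps_t   (latent log-variance, AR(1))
    y_t = exp(h_t / 2) * z_t                          (observed return)
    """
    rng = np.random.default_rng(seed)
    h = np.empty(n)
    # draw h_0 from the stationary distribution of the AR(1)
    h[0] = mu + sigma_eta / np.sqrt(1 - phi**2) * rng.standard_normal()
    for t in range(1, n):
        h[t] = mu + phi * (h[t - 1] - mu) + sigma_eta * rng.standard_normal()
    y = np.exp(h / 2) * rng.standard_normal(n)
    return y, h

# Same sample size as the MAD/EURO series in the paper
y, h = simulate_sv(4456)
```

The high persistence parameter (phi near one) produces the volatility clustering that such exchange-rate series typically exhibit; MCMC estimation then amounts to inferring (mu, phi, sigma_eta) and the latent path h from y alone.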
Markus Neumayer, Thomas Suppan and Thomas Bretterklieber
Abstract
Purpose
The application of statistical inversion theory provides a powerful approach for solving estimation problems including the ability for uncertainty quantification (UQ) by means of Markov chain Monte Carlo (MCMC) methods and Monte Carlo integration. This paper aims to analyze the application of a state reduction technique within different MCMC techniques to improve the computational efficiency and the tuning process of these algorithms.
Design/methodology/approach
A reduced state representation is constructed from a general prior distribution. For sampling, the Metropolis-Hastings (MH) algorithm and the Gibbs sampler are used. Efficient proposal generation techniques and techniques for conditional sampling are proposed and evaluated on an exemplary inverse problem.
Findings
For the MH algorithm, high acceptance rates can be obtained with a simple proposal kernel. For the Gibbs sampler, an efficient technique for conditional sampling was found. The state reduction scheme stabilizes the ill-posed inverse problem, allowing a solution without a dedicated prior distribution. The state reduction is suitable for representing general material distributions.
Practical implications
The state reduction scheme and the MCMC techniques can be applied in different imaging problems. The stabilizing nature of the state reduction improves the solution of ill-posed problems. The tuning of the MCMC methods is simplified.
Originality/value
The paper presents a method to improve the solution process of inverse problems within the Bayesian framework. The stabilization of the inverse problem due to the state reduction improves the solution. The approach simplifies the tuning of MCMC methods.
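A generic random-walk Metropolis-Hastings sampler with the kind of simple proposal kernel the findings refer to can be sketched as follows (the standard normal target and step size are illustrative):

```python
import numpy as np

def metropolis_hastings(log_target, x0, n_samples, step=0.5, seed=0):
    """Random-walk Metropolis-Hastings: propose x' = x + step * N(0, 1),
    accept with probability min(1, target(x') / target(x))."""
    rng = np.random.default_rng(seed)
    x = x0
    samples = np.empty(n_samples)
    accepted = 0
    for i in range(n_samples):
        x_prop = x + step * rng.standard_normal()
        # symmetric proposal -> acceptance ratio reduces to the target ratio
        if np.log(rng.uniform()) < log_target(x_prop) - log_target(x):
            x, accepted = x_prop, accepted + 1
        samples[i] = x
    return samples, accepted / n_samples

# Standard normal target: log density up to a constant is -x^2 / 2
samples, acc_rate = metropolis_hastings(lambda x: -0.5 * x**2, 0.0, 20000)
```

Tuning here reduces to choosing `step`: too small gives high acceptance but slow exploration, too large gives frequent rejection; this is the tuning burden the state reduction technique is said to simplify.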
Daniel Watzenig, Markus Neumayer and Colin Fox
Abstract
Purpose
The purpose of this paper is to establish a cheap but accurate approximation of the forward map in electrical capacitance tomography in order to approach robust real‐time inversion in the framework of Bayesian statistics based on Markov chain Monte Carlo (MCMC) sampling.
Design/methodology/approach
Existing formulations and methods to reduce the order of the forward model, with a focus on electrical tomography, are reviewed and compared. In this work, the problem of fast and robust estimation of shape and position of non‐conducting inclusions in an otherwise uniform background is considered. The boundary of the inclusion is represented implicitly using an appropriate interpolation strategy based on radial basis functions. The inverse problem is formulated as Bayesian inference, with MCMC sampling used to efficiently explore the posterior distribution. An affine approximation to the forward map built over the state space is introduced to significantly reduce the reconstruction time, while maintaining spatial accuracy. It is shown that the proposed approximation is unbiased and the variance of the introduced additional model error is even smaller than the measurement error of the tomography instrumentation. Numerical examples are presented, avoiding all inverse crimes.
Findings
The paper provides a consistent formulation of the affine approximation with application to imaging of binary mixtures in electrical tomography using MCMC sampling with Metropolis‐Hastings‐Green dynamics.
Practical implications
The proposed cheap approximation indicates that accurate real‐time inversion of capacitance data using statistical inversion is possible.
Originality/value
The proposed approach demonstrates that a tolerably small increase in posterior uncertainty of relevant parameters, e.g. inclusion area and contour shape, is traded for a huge reduction in computing time without introducing bias in estimates. Furthermore, the proposed framework – approximated forward map combined with statistical inversion – can be applied to all kinds of soft‐field tomography problems.
Mahmoud ELsayed and Amr Soliman
Abstract
Purpose
The purpose of this study is to estimate linear regression parameters using two alternative techniques: the generalized linear model (GLM) and the Markov chain Monte Carlo (MCMC) method.
Design/methodology/approach
In this paper, the authors adopted the incurred claims of the Egyptian non-life insurance market as the dependent variable over a 10-year period. MCMC uses Gibbs sampling to generate a sample from the posterior distribution of a linear regression in order to estimate the parameters of interest. The authors used R to estimate the parameters of the linear regression with both techniques.
Findings
These procedures can guide the decision-maker in estimating the reserve and setting a proper investment strategy.
Originality/value
In this paper, the authors estimate the parameters of a linear regression model using the MCMC method in R. MCMC uses Gibbs sampling to generate a sample from the posterior distribution of the linear regression, producing parameter estimates from which future claims can be predicted. These procedures can guide the decision-maker in estimating the reserve and setting a proper investment strategy.
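Gibbs sampling for a linear regression posterior alternates between the full conditional of the coefficients and that of the error variance. A minimal sketch, assuming a flat prior on the coefficients and an improper 1/s2 prior on the variance, with NumPy standing in for the unnamed R package (the toy data are illustrative):

```python
import numpy as np

def gibbs_linreg(X, y, n_iter=3000, seed=0):
    """Gibbs sampler for y = X b + e, e ~ N(0, s2 I), flat prior on b,
    p(s2) proportional to 1/s2."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    b_hat = XtX_inv @ X.T @ y          # OLS estimate = conditional mean of b
    s2 = 1.0
    draws_b = np.empty((n_iter, p))
    draws_s2 = np.empty(n_iter)
    for i in range(n_iter):
        # b | s2, y  ~  N(b_hat, s2 * (X'X)^-1)
        b = rng.multivariate_normal(b_hat, s2 * XtX_inv)
        # s2 | b, y  ~  Inverse-Gamma(n/2, RSS(b)/2)
        rss = np.sum((y - X @ b) ** 2)
        s2 = 1.0 / rng.gamma(n / 2, 2.0 / rss)
        draws_b[i], draws_s2[i] = b, s2
    return draws_b, draws_s2

# Toy data with known coefficients (2, -1) and noise sd 0.5
rng = np.random.default_rng(1)
X = np.column_stack([np.ones(200), rng.standard_normal(200)])
y = X @ np.array([2.0, -1.0]) + 0.5 * rng.standard_normal(200)
draws_b, draws_s2 = gibbs_linreg(X, y)
```

Discarding an initial burn-in and averaging the remaining draws gives the posterior means used for point prediction of future claims.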
Md. Nazmul Ahsan and Jean-Marie Dufour
Abstract
Statistical inference (estimation and testing) for the stochastic volatility (SV) model of Taylor (1982, 1986) is challenging, especially for likelihood-based methods, which are difficult to apply due to the presence of latent variables. Existing methods are computationally costly, inefficient, or both. In this paper, we propose computationally simple estimators for the SV model, which are at the same time highly efficient. The proposed class of estimators uses a small number of moment equations derived from an ARMA representation associated with the SV model, along with the possibility of using “winsorization” to improve stability and efficiency. We call these ARMA-SV estimators. Closed-form expressions for ARMA-SV estimators are obtained, and no numerical optimization procedure or choice of initial parameter values is required. The asymptotic distributional theory of the proposed estimators is studied. Due to their computational simplicity, the ARMA-SV estimators allow one to make reliable – even exact – simulation-based inference, through the application of Monte Carlo (MC) tests or bootstrap methods. We compare them in a simulation experiment with a wide array of alternative estimation methods, in terms of bias, root mean square error and computation time. In addition to confirming the enormous computational advantage of the proposed estimators, the results show that ARMA-SV estimators match (or exceed) alternative estimators in terms of precision, including the widely used Bayesian estimator. The proposed methods are applied to daily observations on the returns for three major stock prices (Coca-Cola, Walmart, Ford) and the S&P Composite Price Index (2000–2017). The results confirm the presence of stochastic volatility with strong persistence.
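Winsorization, mentioned above as a stabilizer for the moment equations, simply clips values beyond chosen quantiles rather than discarding them. A minimal sketch (the 5%/95% cutoffs are illustrative):

```python
import numpy as np

def winsorize(x, lower=0.05, upper=0.95):
    """Clip values below the `lower` quantile and above the `upper`
    quantile to those quantiles, limiting the influence of outliers."""
    lo, hi = np.quantile(x, [lower, upper])
    return np.clip(x, lo, hi)

# Values 0..100 get clipped to the interval [5, 95]
xw = winsorize(np.arange(101.0))
```

Unlike trimming, winsorization keeps the sample size intact, so moment estimators computed from the winsorized series remain comparable across samples.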
Abstract
The BEKK GARCH class of models presents a popular set of tools for applied analysis of dynamic conditional covariances. Within this class the analyst faces a range of model choices that trade off flexibility with parameter parsimony. In the most flexible unrestricted BEKK the parameter dimensionality increases quickly with the number of variables. Covariance targeting decreases model dimensionality but induces a set of nonlinear constraints on the underlying parameter space that are difficult to implement. Recently, the rotated BEKK (RBEKK) has been proposed whereby a targeted BEKK model is applied after the spectral decomposition of the conditional covariance matrix. An easily estimable RBEKK implies a full albeit constrained BEKK for the unrotated returns. However, the degree of the implied restrictiveness is currently unknown. In this paper, we suggest a Bayesian approach to estimation of the BEKK model with targeting based on Constrained Hamiltonian Monte Carlo (CHMC). We take advantage of suitable parallelization of the problem within CHMC utilizing the newly available computing power of multi-core CPUs and Graphical Processing Units (GPUs) that enables us to deal effectively with the inherent nonlinear constraints posed by covariance targeting in relatively high dimensions. Using parallel CHMC we perform a model comparison in terms of predictive ability of the targeted BEKK with the RBEKK in the context of an application concerning a multivariate dynamic volatility analysis of a Dow Jones Industrial returns portfolio. Although the RBEKK does improve over a diagonal BEKK restriction, it is clearly dominated by the full targeted BEKK model.
Garland Durham and John Geweke
Abstract
Massively parallel desktop computing capabilities now well within the reach of individual academics modify the environment for posterior simulation in fundamental and potentially quite advantageous ways. But to fully exploit these benefits algorithms that conform to parallel computing environments are needed. This paper presents a sequential posterior simulator designed to operate efficiently in this context. The simulator makes fewer analytical and programming demands on investigators, and is faster, more reliable, and more complete than conventional posterior simulators. The paper extends existing sequential Monte Carlo methods and theory to provide a thorough and practical foundation for sequential posterior simulation that is well suited to massively parallel computing environments. It provides detailed recommendations on implementation, yielding an algorithm that requires only code for simulation from the prior and evaluation of prior and data densities and works well in a variety of applications representative of serious empirical work in economics and finance. The algorithm facilitates Bayesian model comparison by producing marginal likelihood approximations of unprecedented accuracy as an incidental by-product, is robust to pathological posterior distributions, and provides estimates of numerical standard error and relative numerical efficiency intrinsically. The paper concludes with an application that illustrates the potential of these simulators for applied Bayesian inference.
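The sequential Monte Carlo idea the paper builds on can be sketched for a toy normal-mean model: particles drawn from the prior are reweighted as observations arrive and resampled when the effective sample size collapses. This sketch omits the mutation and parallelization steps central to the paper, and all parameter values are illustrative:

```python
import numpy as np

def sequential_posterior(y, n_particles=100_000, prior_sd=10.0, seed=0):
    """Sequential Monte Carlo for the mean of N(mu, 1) data:
    particles start from the prior N(0, prior_sd^2); each observation
    updates the log-weights, and particles are resampled whenever the
    effective sample size (ESS) drops below half the particle count."""
    rng = np.random.default_rng(seed)
    particles = prior_sd * rng.standard_normal(n_particles)
    logw = np.zeros(n_particles)
    for obs in y:
        logw += -0.5 * (obs - particles) ** 2   # N(mu, 1) log-likelihood
        w = np.exp(logw - logw.max())
        w /= w.sum()
        if 1.0 / np.sum(w**2) < n_particles / 2:   # ESS check
            idx = rng.choice(n_particles, size=n_particles, p=w)
            particles, logw = particles[idx], np.zeros(n_particles)
    w = np.exp(logw - logw.max())
    w /= w.sum()
    return np.sum(w * particles)   # weighted posterior mean

rng = np.random.default_rng(42)
y = 1.0 + rng.standard_normal(50)
post_mean = sequential_posterior(y)
```

The structure explains the parallel appeal: the per-particle weight updates are embarrassingly parallel, and only the resampling step requires communication. The running normalizing constants (dropped here) are what yield the marginal likelihood as a by-product.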
Ivan Jeliazkov and Esther Hee Lee
Abstract
A major stumbling block in multivariate discrete data analysis is the problem of evaluating the outcome probabilities that enter the likelihood function. Calculation of these probabilities involves high-dimensional integration, making simulation methods indispensable in both Bayesian and frequentist estimation and model choice. We review several existing probability estimators and then show that a broader perspective on the simulation problem can be afforded by interpreting the outcome probabilities through Bayes’ theorem, leading to the recognition that estimation can alternatively be handled by methods for marginal likelihood computation based on the output of Markov chain Monte Carlo (MCMC) algorithms. These techniques offer stand-alone approaches to simulated likelihood estimation but can also be integrated with traditional estimators. Building on both branches in the literature, we develop new methods for estimating response probabilities and propose an adaptive sampler for producing high-quality draws from multivariate truncated normal distributions. A simulation study illustrates the practical benefits and costs associated with each approach. The methods are employed to estimate the likelihood function of a correlated random effects panel data model of women's labor force participation.
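Univariate truncated normal draws are the building block behind samplers for multivariate truncated normals such as the adaptive one proposed above; they can be generated by the inverse-CDF method. A basic sketch, not the authors' adaptive scheme (bounds and parameters are illustrative):

```python
import numpy as np
from scipy.stats import norm

def truncnorm_draw(mu, sigma, lo, hi, size, rng):
    """Sample N(mu, sigma^2) truncated to [lo, hi] via inverse CDF:
    draw uniforms on [Phi(a), Phi(b)] and map them back through Phi^-1."""
    a = norm.cdf((lo - mu) / sigma)
    b = norm.cdf((hi - mu) / sigma)
    u = rng.uniform(a, b, size)
    return mu + sigma * norm.ppf(u)

rng = np.random.default_rng(0)
x = truncnorm_draw(0.0, 1.0, 0.5, 2.0, 50_000, rng)
```

In a Gibbs scheme for multivariate truncated normals, each coordinate is drawn this way conditional on the others, with `lo` and `hi` set by the region defining the discrete outcome.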
Cathy W.S. Chen, Richard Gerlach and Mike K.P. So
Abstract
It is well known that volatility asymmetry exists in financial markets. This paper reviews and investigates recently developed techniques for Bayesian estimation and model selection applied to a large group of modern asymmetric heteroskedastic models. These include the GJR-GARCH, threshold autoregression with GARCH errors, TGARCH, and double threshold heteroskedastic model with auxiliary threshold variables. Further, we briefly review recent methods for Bayesian model selection, such as, reversible-jump Markov chain Monte Carlo, Monte Carlo estimation via independent sampling from each model, and importance sampling methods. Seven heteroskedastic models are then compared, for three long series of daily Asian market returns, in a model selection study illustrating the preferred model selection method. Major evidence of nonlinearity in mean and volatility is found, with the preferred model having a weighted threshold variable of local and international market news.
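The asymmetry in GJR-GARCH, one of the reviewed models, enters as a leverage term that activates only after negative returns. A sketch of the variance recursion (parameter values are illustrative):

```python
import numpy as np

def gjr_garch_variance(returns, omega=0.02, alpha=0.05, gamma=0.10, beta=0.85):
    """GJR-GARCH(1,1) conditional variance:
    s2_t = omega + (alpha + gamma * 1[r_{t-1} < 0]) * r_{t-1}^2 + beta * s2_{t-1}
    The gamma term raises volatility after negative returns (leverage effect).
    """
    r = np.asarray(returns, dtype=float)
    s2 = np.empty(len(r))
    s2[0] = r.var()   # initialize at the sample variance
    for t in range(1, len(r)):
        leverage = gamma if r[t - 1] < 0 else 0.0
        s2[t] = omega + (alpha + leverage) * r[t - 1] ** 2 + beta * s2[t - 1]
    return s2

# A negative shock raises next-period variance more than an equal positive one
s2_neg = gjr_garch_variance([-2.0, 0.0, 0.0])
s2_pos = gjr_garch_variance([2.0, 0.0, 0.0])
```

Setting `gamma = 0` recovers plain GARCH(1,1), which is what makes the asymmetry hypothesis directly testable in the Bayesian model-selection exercises the abstract describes.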
Richard Ohene Asiedu and William Gyadu-Asiedu
Abstract
Purpose
This paper aims to focus on developing a baseline model for time overrun.
Design/methodology/approach
Information on 321 completed construction projects was used to assess the predictive performance of two statistical techniques, namely multiple regression and the Bayesian approach.
Findings
The results from the Bayesian Markov chain Monte Carlo model were observed to improve predictive ability compared with multiple linear regression. Beyond the unique nuances peculiar to the projects executed, the scope factors initial duration, gross floor area and number of storeys were observed to be stable predictors of time overrun.
Originality/value
This current model contributes to improving the reliability of predicting time overruns.