Search results

1 – 10 of over 9000
Book part
Publication date: 21 December 2010

Chandra R. Bhat, Cristiano Varin and Nazneen Ferdous

Abstract

This chapter compares the performance of the maximum simulated likelihood (MSL) approach with the composite marginal likelihood (CML) approach in multivariate ordered-response situations. The ability of the two approaches to recover model parameters in simulated data sets is examined, as is the efficiency of estimated parameters and computational cost. Overall, the simulation results demonstrate the ability of the CML approach to recover the parameters very well in a 5–6 dimensional ordered-response choice model context. In addition, the CML recovers parameters as well as the MSL estimation approach in the simulation contexts used in this study, while also doing so at a substantially reduced computational cost. Further, any reduction in the efficiency of the CML approach relative to the MSL approach is in the range of nonexistent to small. When taken together with its conceptual and implementation simplicity, the CML approach appears to be a promising approach for the estimation of not only the multivariate ordered-response model considered here, but also for other analytically intractable econometric models.
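The pairwise composite marginal likelihood idea the chapter evaluates can be sketched as follows: each observation contributes the product of bivariate ordered-probit likelihoods over all pairs of dimensions. This is an illustrative sketch, not the chapter's code; function names are made up, and the finite cutpoint stand-ins for the ±∞ boundary cutpoints are an assumption of the sketch.

```python
# Sketch: pairwise composite marginal likelihood (CML) for a multivariate
# ordered-probit model. Names and conventions here are illustrative.
import numpy as np
from itertools import combinations
from scipy.stats import multivariate_normal

def bvn_rect(lo, hi, rho):
    """P(lo0 < Z0 <= hi0, lo1 < Z1 <= hi1) for bivariate standard normal, corr rho."""
    mvn = multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]])
    return (mvn.cdf([hi[0], hi[1]]) - mvn.cdf([lo[0], hi[1]])
            - mvn.cdf([hi[0], lo[1]]) + mvn.cdf([lo[0], lo[1]]))

def cml_loglik(y, mu, corr, cut):
    """Sum of log bivariate marginal likelihoods over all dimension pairs.

    y: (n, d) integer responses in 0..K-1; mu: (n, d) latent means;
    corr: (d, d) latent correlation matrix; cut: K+1 cutpoints, with large
    finite values (e.g. +/-10) standing in for the +/-inf boundaries.
    """
    n, d = y.shape
    ll = 0.0
    for i in range(n):
        for j, k in combinations(range(d), 2):
            lo = [cut[y[i, j]] - mu[i, j], cut[y[i, k]] - mu[i, k]]
            hi = [cut[y[i, j] + 1] - mu[i, j], cut[y[i, k] + 1] - mu[i, k]]
            ll += np.log(bvn_rect(lo, hi, corr[j, k]))
    return ll
```

Maximizing this pairwise sum over the model parameters is what avoids the high-dimensional integration that MSL must simulate.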

Details

Maximum Simulated Likelihood Methods and Applications
Type: Book
ISBN: 978-0-85724-150-4

Book part
Publication date: 1 January 2008

Michael K. Andersson and Sune Karlsson

Abstract

We consider forecast combination and, indirectly, model selection for VAR models when there is uncertainty about which variables to include in the model in addition to the forecast variables. The key difference from traditional Bayesian variable selection is that we also allow for uncertainty regarding which endogenous variables to include in the model. That is, all models include the forecast variables, but may otherwise have differing sets of endogenous variables. This is a difficult problem to tackle with a traditional Bayesian approach. Our solution is to focus on the forecasting performance for the variables of interest and we construct model weights from the predictive likelihood of the forecast variables. The procedure is evaluated in a small simulation study and found to perform competitively in applications to real world data.
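The weighting step described above can be sketched in a few lines: log predictive likelihoods are normalized into model weights, which then average the forecasts. Function names and the log-sum-exp stabilization detail are this sketch's assumptions, not the chapter's code.

```python
# Sketch: forecast combination with weights built from predictive likelihoods.
import numpy as np

def predictive_weights(log_pred_lik):
    """Turn per-model log predictive likelihoods into combination weights."""
    log_pred_lik = np.asarray(log_pred_lik, dtype=float)
    w = np.exp(log_pred_lik - log_pred_lik.max())  # subtract max to avoid overflow
    return w / w.sum()

def combine_forecasts(forecasts, log_pred_lik):
    """Weighted average of model forecasts (one row per model)."""
    w = predictive_weights(log_pred_lik)
    return w @ np.asarray(forecasts, dtype=float)
```

A model whose predictive likelihood for the forecast variables is three times higher than a rival's receives three times the weight, regardless of how many endogenous variables it carries.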

Details

Bayesian Econometrics
Type: Book
ISBN: 978-1-84855-308-8

Book part
Publication date: 19 November 2014

Garland Durham and John Geweke

Abstract

Massively parallel desktop computing capabilities now well within the reach of individual academics modify the environment for posterior simulation in fundamental and potentially quite advantageous ways. But to fully exploit these benefits algorithms that conform to parallel computing environments are needed. This paper presents a sequential posterior simulator designed to operate efficiently in this context. The simulator makes fewer analytical and programming demands on investigators, and is faster, more reliable, and more complete than conventional posterior simulators. The paper extends existing sequential Monte Carlo methods and theory to provide a thorough and practical foundation for sequential posterior simulation that is well suited to massively parallel computing environments. It provides detailed recommendations on implementation, yielding an algorithm that requires only code for simulation from the prior and evaluation of prior and data densities and works well in a variety of applications representative of serious empirical work in economics and finance. The algorithm facilitates Bayesian model comparison by producing marginal likelihood approximations of unprecedented accuracy as an incidental by-product, is robust to pathological posterior distributions, and provides estimates of numerical standard error and relative numerical efficiency intrinsically. The paper concludes with an application that illustrates the potential of these simulators for applied Bayesian inference.
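A toy illustration of the "prior simulation plus density evaluation" recipe the paper describes, for a normal-mean model: particles are drawn from the prior, sequentially reweighted by the data density, and resampled, with the marginal likelihood accumulating as a by-product. The model, tuning values, and the omission of move steps are simplifications of this sketch, not the paper's algorithm.

```python
# Sketch: a minimal sequential (data-tempered) posterior simulator.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def smc_normal_mean(data, prior_mu=0.0, prior_sd=10.0, sigma=1.0, n_part=5000):
    """Sequential posterior simulator for the mean of N(theta, sigma^2)."""
    theta = rng.normal(prior_mu, prior_sd, n_part)  # simulate particles from the prior
    log_ml = 0.0                                    # running log marginal likelihood
    for y in data:
        logw = norm.logpdf(y, loc=theta, scale=sigma)  # incremental importance weights
        m = logw.max()
        w = np.exp(logw - m)
        log_ml += m + np.log(w.mean())              # marginal-likelihood increment
        theta = theta[rng.choice(n_part, size=n_part, p=w / w.sum())]  # resample
        # (a full implementation adds Metropolis-Hastings move steps here to
        #  restore particle diversity after resampling; each particle's moves
        #  are independent, which is what makes the scheme embarrassingly parallel)
    return theta, log_ml
```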

Book part
Publication date: 18 January 2022

Details

Essays in Honor of M. Hashem Pesaran: Panel Modeling, Micro Applications, and Econometric Methodology
Type: Book
ISBN: 978-1-80262-065-8

Article
Publication date: 23 August 2019

Navajyoti Samanta

Abstract

Purpose

For the past two and a half decades, there has been a marked shift in corporate governance regulations around the world. The change is more remarkable in developing countries, where countries with little or no corporate governance regime have adopted "world class" standards. While there can be a debate on whether law in books actually translates into law in action, in the meantime it is interesting to analyse the law in books to understand how the corporate governance regime has evolved in the past 20 years. This paper quantitatively tracks 21 countries, most of them developing and emerging economies, over a period of 20 years. The period covers 1995 to 2014 and thus traverses the pre- and post-crisis periods around 1999 and 2008. The paper thereby also provides a snapshot of the macrolegal changes that countries engage in hoping to stave off the next crisis. The paper uses over 50 parameters modelled on the OECD Principles of Corporate Governance. It confirms the suspicion that corporate governance norms around the developing economies are converging on the shareholder-primacy end of the continuum. The rate of convergence was highest just before the financial crisis of 2008 and has since slowed down.

Design/methodology/approach

The paper uses data collected from experts, who filled in a detailed questionnaire on the rules relating to corporate governance norms in their country and retrospectively checked their answers at five-year intervals over the past 20 years. This provided an excellent overview of how the law on corporate governance has evolved over the past two decades. The data were then tabulated using a scoring sheet and combined using item response theory (IRT), a Bayesian method similar to factor analysis. The paper then follows a comparative approach, using heatmaps to analyse the evolution of corporate governance in developing countries.

Findings

Corporate governance norms around the developing economies are converging on the shareholder-primacy end of the continuum. The rate of convergence was highest just before the financial crisis of 2008 and has since slowed down.

Originality/value

This is the first time that corporate governance panel data analysis has been carried out on top developing countries across so many parameters for such a long period. The paper also uses Bayesian IRT modelling, an approach that is novel especially in the corporate governance literature. The paper thus provides a clear view of how corporate governance norms have evolved and how they are converging on a particular ideology.

Details

Corporate Governance: The International Journal of Business in Society, vol. 19 no. 5
Type: Research Article
ISSN: 1472-0701

Book part
Publication date: 30 December 2004

Leslie W. Hepple

Abstract

Within spatial econometrics a whole family of different spatial specifications has been developed, with associated estimators and tests. This leads to issues of model comparison and model choice: measuring the relative merits of alternative specifications and then using appropriate criteria to choose the "best" model or to compute relative model probabilities. Bayesian theory provides a comprehensive and coherent framework for such model choice, including both nested and non-nested models within the choice set. The paper reviews the potential application of this Bayesian theory to spatial econometric models, examining the conditions and assumptions under which application is possible. Problems of prior distributions are outlined, and Bayes factors and marginal likelihoods are derived for a particular subset of spatial econometric specifications. These are then applied to two well-known spatial data sets to illustrate the methods. Future possibilities, and comparisons with other approaches to both Bayesian and non-Bayesian model choice, are discussed.
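The model-choice arithmetic underlying Bayes factors is simple once marginal likelihoods are in hand: posterior model probabilities are the (prior-weighted) normalized marginal likelihoods. The sketch below assumes equal prior model probabilities by default; names are illustrative.

```python
# Sketch: posterior model probabilities and Bayes factors from log marginal
# likelihoods, the quantities the paper derives for spatial specifications.
import numpy as np

def posterior_model_probs(log_ml, prior=None):
    """Posterior model probabilities from log marginal likelihoods.

    prior: optional prior model probabilities (defaults to equal)."""
    log_ml = np.asarray(log_ml, dtype=float)
    if prior is None:
        prior = np.full(log_ml.size, 1.0 / log_ml.size)
    log_post = log_ml + np.log(prior)          # unnormalized log posterior
    p = np.exp(log_post - log_post.max())      # stabilize before normalizing
    return p / p.sum()

def log_bayes_factor(log_ml_1, log_ml_2):
    """log BF_12 = log m(y | M1) - log m(y | M2)."""
    return log_ml_1 - log_ml_2
```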

Details

Spatial and Spatiotemporal Econometrics
Type: Book
ISBN: 978-0-76231-148-4

Book part
Publication date: 19 November 2014

Esther Hee Lee

Abstract

Copula modeling enables the analysis of multivariate count data that has previously required imposition of potentially undesirable correlation restrictions or has limited attention to models with only a few outcomes. This article presents a method for analyzing correlated counts that is appealing because it retains well-known marginal distributions for each response while simultaneously allowing for flexible correlations among the outcomes. The proposed framework extends the applicability of the method to settings with high-dimensional outcomes and provides an efficient simulation method to generate the correlation matrix in a single step. Another open problem that is tackled is that of model comparison. In particular, the article presents techniques for estimating marginal likelihoods and Bayes factors in copula models. The methodology is implemented in a study of the joint behavior of four categories of US technology patents. The results reveal that patent counts exhibit high levels of correlation among categories and that joint modeling is crucial for eliciting the interactions among these variables.
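The basic construction behind such models — known marginals linked by a flexible correlation structure — can be sketched with a Gaussian copula over Poisson counts. The marginal means and correlation values below are made-up illustration values, not the article's patent-data estimates.

```python
# Sketch: simulating correlated counts through a Gaussian copula, keeping
# exact Poisson marginals while inducing dependence via a latent Gaussian.
import numpy as np
from scipy.stats import norm, poisson

rng = np.random.default_rng(1)

def gaussian_copula_counts(n, mus, corr):
    """Draw n vectors of counts with Poisson(mu_j) marginals linked by a
    Gaussian copula with latent correlation matrix corr."""
    d = len(mus)
    z = rng.multivariate_normal(np.zeros(d), corr, size=n)  # latent Gaussians
    u = norm.cdf(z)                                         # uniform marginals
    return np.column_stack([poisson.ppf(u[:, j], mus[j]) for j in range(d)])
```

Because each column is an inverse-CDF transform of a uniform variate, the marginal distributions are exactly Poisson while the latent correlation carries the dependence.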

Details

Bayesian Model Comparison
Type: Book
ISBN: 978-1-78441-185-5

Book part
Publication date: 19 November 2014

Enrique Martínez-García and Mark A. Wynne

Abstract

We investigate the Bayesian approach to model comparison within a two-country framework with nominal rigidities using the workhorse New Keynesian open-economy model of Martínez-García and Wynne (2010). We discuss the trade-offs that monetary policy – characterized by a Taylor-type rule – faces in an interconnected world, with perfectly flexible exchange rates. We then use posterior model probabilities to evaluate the weight of evidence in support of such a model when estimated against more parsimonious specifications that either abstract from monetary frictions or assume autarky by means of controlled experiments that employ simulated data. We argue that Bayesian model comparison with posterior odds is sensitive to sample size and the choice of observable variables for estimation. We show that posterior model probabilities strongly penalize overfitting, which can lead us to favor a less parameterized model against the true data-generating process when the two become arbitrarily close to each other. We also illustrate that the spillovers from monetary policy across countries have an added confounding effect.

Article
Publication date: 26 December 2023

Hai Le and Phuong Nguyen

Abstract

Purpose

This study examines the importance of exchange rate and credit growth fluctuations when designing monetary policy in Thailand. To this end, the authors construct a small open economy New Keynesian dynamic stochastic general equilibrium (DSGE) model. The model encompasses several essential characteristics, including incomplete financial markets, incomplete exchange rate pass-through, deviations from the law of one price and a banking sector. The authors consider generalized Taylor rules, in which policymakers adjust policy rates in response to output, inflation, credit growth and exchange rate fluctuations. The marginal likelihoods are then employed to investigate whether the central bank responds to fluctuations in the exchange rate and credit growth.
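A generalized Taylor rule of the kind described — interest-rate smoothing plus responses to inflation, output, credit growth, and exchange rate movements — can be written as a one-line function. The coefficient names and default values below are illustrative, not the paper's estimates.

```python
# Sketch: a generalized Taylor rule with smoothing,
# i_t = rho*i_{t-1} + (1-rho)*(phi_pi*pi + phi_y*y + phi_c*credit + phi_e*fx).
def taylor_rate(i_prev, pi, y_gap, credit_gr, fx_depr,
                rho=0.8, phi_pi=1.5, phi_y=0.5, phi_c=0.1, phi_e=0.1):
    """Policy rate implied by the rule; setting phi_c or phi_e to zero
    recovers the restricted rules compared via marginal likelihoods."""
    target = phi_pi * pi + phi_y * y_gap + phi_c * credit_gr + phi_e * fx_depr
    return rho * i_prev + (1.0 - rho) * target
```

Comparing marginal likelihoods of the model estimated with and without the phi_c and phi_e terms is what lets the authors test whether the central bank responds to credit growth and the exchange rate.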

Design/methodology/approach

This study constructs a small open economy DSGE model and then estimates the model using Bayesian methods.

Findings

The authors demonstrate that the monetary authority does target exchange rates, whereas there is no evidence in favor of incorporating credit growth into the policy rules. These findings survive various robustness checks. Furthermore, the authors demonstrate that domestic shocks contribute significantly to domestic business cycles. Although the terms of trade shock plays a minor role in business cycles, it explains the most significant proportion of exchange rate fluctuations, followed by the country risk premium shock.

Originality/value

This study is the first attempt at exploring the relevance of exchange rate and credit growth fluctuations when designing monetary policy in Thailand.

Details

Journal of Economic Studies, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0144-3585

Book part
Publication date: 19 December 2012

R. Kelley Pace, James P. LeSage and Shuang Zhu

Abstract

Most spatial econometrics work focuses on spatial dependence in the regressand or disturbances. However, LeSage and Pace (2009) as well as Pace and LeSage (2009) showed that the bias in β from applying OLS to a regressand generated from a spatial autoregressive process was exacerbated by spatial dependence in the regressor. Also, the marginal likelihood function or restricted maximum likelihood (REML) function includes a determinant term involving the regressors. Therefore, high dependence in the regressor may affect the likelihood through this term. In addition, Bowden and Turkington (1984) showed that regressor temporal autocorrelation had a non-monotonic effect on instrumental variable estimators.

We provide empirical evidence that many common economic variables used as regressors (e.g., income, race, and employment) exhibit high levels of spatial dependence. Based on this observation, we conduct a Monte Carlo study of maximum likelihood (ML), REML and two instrumental variable specifications for spatial autoregressive (SAR) and spatial Durbin models (SDM) in the presence of spatially correlated regressors.

Findings indicate that as spatial dependence in the regressor rises, REML outperforms ML and that performance of the instrumental variable methods suffer. The combination of correlated regressors and the SDM specification provides a challenging environment for instrumental variable techniques.

We also examine estimates of marginal effects and show that these behave better than estimates of the underlying model parameters used to construct marginal effects estimates. Suggestions for improving design of Monte Carlo experiments are provided.
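The data-generating process for a Monte Carlo design of this kind — a SAR regressand whose regressor is itself spatially dependent — can be sketched as below. The weight matrix (neighbors on a line, row-normalized) and all parameter values are illustrative choices of this sketch, not the chapter's design.

```python
# Sketch: simulating y = (I - rho*W)^{-1} (x*beta + eps) with a spatially
# autoregressive regressor x = (I - lam*W)^{-1} u.
import numpy as np

rng = np.random.default_rng(2)

def line_weights(n):
    """Row-normalized contiguity matrix for n locations on a line."""
    W = np.zeros((n, n))
    for i in range(n):
        for j in (i - 1, i + 1):
            if 0 <= j < n:
                W[i, j] = 1.0
    return W / W.sum(axis=1, keepdims=True)

def sar_sample(n=200, rho=0.7, lam=0.9, beta=1.0):
    W = line_weights(n)
    I = np.eye(n)
    x = np.linalg.solve(I - lam * W, rng.normal(size=n))  # spatially dependent regressor
    eps = rng.normal(size=n)
    y = np.linalg.solve(I - rho * W, beta * x + eps)      # SAR regressand
    return y, x, W
```

Raising lam toward 1 strengthens the spatial dependence in the regressor, which is the dimension along which the chapter compares ML, REML, and instrumental variable estimators.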
