Search results: 1–10 of over 2,000
After briefly reviewing the past history of Bayesian econometrics and Alan Greenspan's (2004) recent description of his use of Bayesian methods in managing policy-making risk, some of the issues and needs that he mentions are discussed and linked to past and present Bayesian econometric research. Then a review of some recent Bayesian econometric research and needs is presented. Finally, some thoughts are presented that relate to the future of Bayesian econometrics.
Bayesian additive regression trees (BART) is a fully Bayesian approach to modeling with ensembles of trees. BART can uncover complex regression functions with high-dimensional regressors in a fairly automatic way and provides Bayesian quantification of uncertainty through the posterior. However, BART assumes independent and identically distributed (i.i.d.) normal errors. This strong parametric assumption can lead to misleading inference and uncertainty quantification. In this chapter we use the classic Dirichlet process mixture (DPM) mechanism to nonparametrically model the error distribution. A key strength of BART is that default prior settings work reasonably well in a variety of problems. The challenge in extending BART is to choose the parameters of the DPM so that the strengths of the standard BART approach are not lost when the errors are close to normal, while the DPM retains the ability to adapt to non-normal errors.
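The DPM error model described above can be sketched in miniature. The following is a hypothetical illustration of drawing regression errors from a truncated stick-breaking Dirichlet process mixture of normals; all parameter values are invented for illustration and are not the chapter's defaults:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_dpm_errors(n, alpha=1.0, K=50, mu0=0.0, tau=1.0, sigma=0.5):
    """Draw n errors from a truncated stick-breaking DP mixture of normals.

    Component means ~ N(mu0, tau^2) with common scale sigma; alpha controls
    how many components receive appreciable weight (larger alpha = more).
    """
    # Stick-breaking: v_k ~ Beta(1, alpha), w_k = v_k * prod_{j<k} (1 - v_j)
    v = rng.beta(1.0, alpha, size=K)
    w = v * np.concatenate(([1.0], np.cumprod(1.0 - v)[:-1]))
    w /= w.sum()                          # renormalize after truncation at K
    mu = rng.normal(mu0, tau, size=K)     # component means
    z = rng.choice(K, size=n, p=w)        # component assignment per error
    return rng.normal(mu[z], sigma)       # draw each error from its component

errors = sample_dpm_errors(2000, alpha=2.0)
```

When the component means concentrate near a single value the draws are close to normal, while a dispersed mixture produces multimodal or heavy-tailed errors — the adaptivity the chapter asks of the DPM.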
Open-economy models are central to the discussion of the trade-offs monetary policy faces in an increasingly globalized world (e.g., Martínez-García & Wynne, 2010), but bringing them to the data is not without its challenges. Controlling for misspecification bias, we trace the problem of uncertainty surrounding structural parameter estimation in the context of a fully specified New Open Economy Macro (NOEM) model partly to sample size. We suggest that standard macroeconomic time series covering fewer than forty years may not be informative enough for some parameters of interest to be recovered with precision. We also illustrate how uncertainty arises from weak structural identification, irrespective of the sample size. This remains a concern for empirical research, and we recommend estimation with simulated observations before using actual data as a way of detecting structural parameters that are prone to weak identification. We also recommend careful evaluation and documentation of the implementation strategy (especially the selection of observables), as it can have significant effects on the strength of identification of key model parameters.
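The recommendation to estimate on simulated observations first can be illustrated with a deliberately under-identified toy model (not the NOEM model itself). When only the product of two structural parameters enters the likelihood, the profile likelihood is flat along a ridge, and simulation reveals this before any actual data are touched:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy model in which only the product a*b is identified: y_t ~ N(a*b, 1).
a_true, b_true = 1.5, 2.0
y = rng.normal(a_true * b_true, 1.0, size=100)   # simulated observations

ybar = y.mean()
grid = np.linspace(0.5, 4.0, 200)                # candidate values for a
# Profile log-likelihood: for each a, b is set to its optimum ybar / a,
# so the implied mean a * (ybar / a) = ybar regardless of a.
profile_ll = np.array([-0.5 * np.sum((y - a * (ybar / a)) ** 2) for a in grid])

flat = np.ptp(profile_ll) < 1e-6                 # ridge: likelihood flat in a
```

A flat profile over a wide range of one parameter is the simulated-data symptom of weak (here, non-) identification that the paper recommends checking for.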
This paper seeks to develop an approach to problem localization and an algorithm to address the issue of determining the dependencies among system metrics for automated system management in ubiquitous computing systems.
This paper proposes an approach to problem localization that learns the behavior of a dynamic environment, using probabilistic dependency analysis to determine problems automatically. The approach is based on Bayesian learning: it describes a system as a hierarchical dependency network and determines the root causes of problems via inductive and deductive inference on the network. A preprocessing algorithm creates ordering parameters that have close relationships with the problems.
The findings show that using ordering parameters as input to network learning reduces learning time while maintaining accuracy across diverse domains, especially when a large number of parameters is involved, hence improving the efficiency and accuracy of problem localization.
An evaluation of the work is presented through performance measurements. Comparisons and evaluations show that the proposed approach is effective for problem localization and can achieve significant cost savings.
This study contributes to research into the application of probabilistic dependency analysis for localizing the root cause of problems and predicting potential problems at run time after probability propagation throughout a network, particularly in relation to fault management in self-managing systems.
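As a minimal stand-in for inference on the hierarchical dependency network, consider a two-level toy network: one binary root cause C influences two observable metrics M1 and M2, and Bayes' rule yields the posterior probability that C is the root cause given the observed symptoms. All probabilities below are invented for illustration:

```python
def posterior_root_cause(p_c, p_m1, p_m2, m1, m2):
    """P(C=1 | M1=m1, M2=m2) for a two-level dependency network.

    p_c        : prior probability that the root cause C is active
    p_m1, p_m2 : (P(Mi=1 | C=0), P(Mi=1 | C=1)) for each metric
    m1, m2     : observed symptom indicators (0 or 1)
    """
    def likelihood(c):
        l1 = p_m1[c] if m1 else 1.0 - p_m1[c]
        l2 = p_m2[c] if m2 else 1.0 - p_m2[c]
        return l1 * l2  # metrics conditionally independent given C

    numerator = p_c * likelihood(1)
    return numerator / (numerator + (1.0 - p_c) * likelihood(0))

# Both symptoms observed: an a priori rare root cause becomes the likely explanation.
p = posterior_root_cause(0.1, (0.05, 0.9), (0.1, 0.8), m1=1, m2=1)
```

The paper's networks are larger and learned from data, but the same propagation of evidence from observed metrics back to candidate causes drives its root-cause ranking.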
Equilibrium job search models allow for labor markets with homogeneous workers and firms to yield nondegenerate wage densities. However, the resulting wage densities do not accord well with empirical regularities. Accordingly, many extensions to the basic equilibrium search model have been considered (e.g., heterogeneity in productivity, heterogeneity in the value of leisure, etc.). It is increasingly common to use nonparametric forms for these extensions and, hence, researchers can obtain a perfect fit (in a kernel smoothed sense) between theoretical and empirical wage densities. This makes it difficult to carry out model comparison of different model extensions. In this paper, we first develop Bayesian parametric and nonparametric methods which are comparable to the existing non-Bayesian literature. We then show how Bayesian methods can be used to compare various nonparametric equilibrium search models in a statistically rigorous sense.
Bayesian approaches have been widely applied in construction management (CM) research due to their capacity to deal with uncertain and complicated problems. However, to date, there has been no systematic review of applications of Bayesian approaches in existing CM studies. This paper systematically reviews applications of Bayesian approaches in CM research and provides insights into potential benefits of this technique for driving innovation and productivity in the construction industry.
A total of 148 articles were retrieved for systematic review through two literature selection rounds.
Bayesian approaches have been widely applied to safety management and risk management. The Bayesian network (BN) was the most frequently employed Bayesian method. Elicitation from expert knowledge and case studies were the primary methods for BN development and validation, respectively. Prediction was the most popular type of reasoning with BNs. The main limitations identified in existing studies are a failure to fully realize the potential of Bayesian approaches in CM functional areas, over-reliance on expert knowledge for BN model development, and a lack of guidance on BN model validation; pertinent recommendations for future research are provided.
This systematic review provides a comprehensive understanding of the application of Bayesian approaches in CM research and highlights implications for future research and practice.
Several lessons learnt from a Bayesian analysis of basic macroeconomic time-series models are presented for the situation where some model parameters have substantial posterior probability near the boundary of the parameter region. This feature refers to near-instability within dynamic models, to forecasting with near-random walk models and to clustering of several economic series in a small number of groups within a data panel. Two canonical models are used: a linear regression model with autocorrelation and a simple variance components model. Several well-known time-series models like unit root and error correction models and further state space and panel data models are shown to be simple generalizations of these two canonical models for the purpose of posterior inference. A Bayesian model averaging procedure is presented in order to deal with models with substantial probability both near and at the boundary of the parameter region. Analytical, graphical, and empirical results using U.S. macroeconomic data, in particular on GDP growth, are presented.
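The model-averaging idea — weighting a boundary model (random walk) against an interior model (stationary AR(1)) — can be sketched with BIC-approximated posterior model probabilities. This is a simplified stand-in for the paper's procedure, using invented simulated data rather than U.S. GDP growth:

```python
import numpy as np

rng = np.random.default_rng(1)

def cond_loglik_ar1(y, rho):
    """Conditional Gaussian log-likelihood of y_t = rho*y_{t-1} + e_t,
    with the error variance profiled out at its MLE."""
    e = y[1:] - rho * y[:-1]
    n = len(e)
    s2 = e @ e / n
    return -0.5 * n * (np.log(2.0 * np.pi * s2) + 1.0)

y = np.cumsum(rng.normal(size=200))   # simulated data near the boundary (random walk)
n = len(y) - 1

ll_rw = cond_loglik_ar1(y, 1.0)                   # boundary model: rho fixed at 1
rho_hat = (y[:-1] @ y[1:]) / (y[:-1] @ y[:-1])    # interior model: rho estimated
ll_ar = cond_loglik_ar1(y, rho_hat)

# BIC-approximated log marginal likelihoods, equal prior model odds
bic = np.array([-2.0 * ll_rw + 1 * np.log(n),     # RW: sigma only
                -2.0 * ll_ar + 2 * np.log(n)])    # AR(1): rho and sigma
logw = -0.5 * bic
weights = np.exp(logw - logw.max())
weights /= weights.sum()                          # posterior model probabilities
```

Averaging forecasts with these weights, rather than conditioning on one model, is what keeps posterior mass both at and near the boundary, as the paper advocates.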
The purpose of this paper is to present an inductive methodology that supports the ranking of entities. The methodology is based on Bayesian latent variable measurement modeling and uses assessment across composite indicators to evaluate internal and external model validity (uncertainty is used in lieu of validity). The proposed methodology is generic, and it is demonstrated on a well-known data set concerning the relative position of countries in “doing business.”
The methodology is demonstrated using data from the World Bank's “Doing Business 2008” project. A Bayesian latent variable measurement model is developed, and both internal and external model uncertainties are considered.
The methodology enables the quantification of model structure uncertainty through comparisons among competing models, nested or non-nested, using both an information-theoretic approach and a Bayesian approach. Furthermore, it estimates the degree of uncertainty in the rankings of alternatives.
Analyses are restricted to first‐order Bayesian measurement models.
Overall, the presented methodology contributes to a better understanding of ranking efforts providing a useful tool for those who publish rankings to gain greater insights into the nature of the distinctions they disseminate.
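Estimating uncertainty in rankings, as described above, can be sketched by Monte Carlo over posterior draws of the latent scores. The draws below are simulated from invented values for four hypothetical entities rather than from an actual fitted measurement model:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical posterior draws of latent "ease of doing business" scores for
# four entities (means and spread invented): rows = draws, columns = entities.
draws = rng.normal(loc=[2.0, 1.8, 1.0, 0.2], scale=0.3, size=(5000, 4))

# Rank within each posterior draw (1 = best), then average across draws.
ranks = (-draws).argsort(axis=1).argsort(axis=1) + 1
p_rank1 = (ranks == 1).mean(axis=0)   # P(entity ranks first), one per entity
```

Entities with close latent scores (the first two here) end up with genuinely uncertain top ranks — exactly the uncertainty the methodology reports alongside the point ranking.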
This chapter extends the work of Baltagi, Bresson, Chaturvedi, and Lacroix (2018) to the popular dynamic panel data model. The authors investigate the robustness of Bayesian panel data models to possible misspecification of the prior distribution. The proposed robust Bayesian approach departs from the standard Bayesian framework in two ways. First, the authors consider the ε-contamination class of prior distributions for the model parameters as well as for the individual effects. Second, both the base elicited priors and the ε-contamination priors use Zellner’s (1986) g-priors for the variance–covariance matrices. The authors propose a general “toolbox” for a wide range of specifications which includes the dynamic panel model with random effects, with cross-correlated effects à la Chamberlain, for the Hausman–Taylor world and for dynamic panel data models with homogeneous/heterogeneous slopes and cross-sectional dependence. Using a Monte Carlo simulation study, the authors compare the finite sample properties of the proposed estimator to those of standard classical estimators. The chapter contributes to the dynamic panel data literature by proposing a general robust Bayesian framework which encompasses the conventional frequentist specifications and their associated estimation methods as special cases.
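The ε-contamination idea can be illustrated in a simple conjugate setting — a normal mean with known variance, not the chapter's g-prior panel setup. The prior is the mixture (1 − ε)·π₀ + ε·q, and the posterior mean mixes the two component posteriors with weights proportional to their marginal likelihoods:

```python
import numpy as np

def contaminated_posterior_mean(ybar, n, sigma2, eps, base, contam):
    """Posterior mean of a normal mean mu under the eps-contamination prior
    (1-eps)*N(base) + eps*N(contam), given ybar from n obs with variance sigma2.

    base, contam: (prior mean, prior sd) for each mixture component.
    """
    def component(mu_p, tau_p):
        post_var = 1.0 / (n / sigma2 + 1.0 / tau_p**2)
        post_mean = post_var * (n * ybar / sigma2 + mu_p / tau_p**2)
        marg_var = sigma2 / n + tau_p**2          # marginal variance of ybar
        log_marg = -0.5 * (np.log(2.0 * np.pi * marg_var)
                           + (ybar - mu_p)**2 / marg_var)
        return post_mean, log_marg

    m0, lm0 = component(*base)                    # base (elicited) prior
    m1, lm1 = component(*contam)                  # contaminating prior
    logw = np.array([np.log1p(-eps) + lm0, np.log(eps) + lm1])
    w = np.exp(logw - logw.max())
    w /= w.sum()                                  # posterior mixture weights
    return w[0] * m0 + w[1] * m1

# Data consistent with the base prior: the contaminant gets little weight.
m = contaminated_posterior_mean(1.0, 20, 1.0, eps=0.1,
                                base=(0.0, 1.0), contam=(5.0, 1.0))
```

When the data instead favor the contaminating component, the weights flip automatically — the robustness to prior misspecification that motivates the ε-contamination class.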