Search results

1 – 10 of 863
Book part
Publication date: 18 January 2022

Badi H. Baltagi, Georges Bresson, Anoop Chaturvedi and Guy Lacroix

Abstract

This chapter extends the work of Baltagi, Bresson, Chaturvedi, and Lacroix (2018) to the popular dynamic panel data model. The authors investigate the robustness of Bayesian panel data models to possible misspecification of the prior distribution. The proposed robust Bayesian approach departs from the standard Bayesian framework in two ways. First, the authors consider the ε-contamination class of prior distributions for the model parameters as well as for the individual effects. Second, both the base elicited priors and the ε-contamination priors use Zellner’s (1986) g-priors for the variance–covariance matrices. The authors propose a general “toolbox” for a wide range of specifications, including the dynamic panel data model with random effects, with cross-correlated effects à la Chamberlain, in the Hausman–Taylor world, and with homogeneous/heterogeneous slopes and cross-sectional dependence. Using a Monte Carlo simulation study, the authors compare the finite sample properties of the proposed estimator to those of standard classical estimators. The chapter contributes to the dynamic panel data literature by proposing a general robust Bayesian framework which encompasses the conventional frequentist specifications and their associated estimation methods as special cases.
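
As a pointer to the two ingredients named in the abstract (standard definitions in generic notation, not reproduced from the chapter; β₀, X, σ², g, π₀ and Q are illustrative symbols), the ε-contamination class around a base prior and Zellner's g-prior can be written as:

```latex
% epsilon-contamination class around a base (elicited) prior \pi_0,
% with Q a class of contaminating distributions and 0 \le \epsilon \le 1:
\Gamma_{\epsilon} = \{\, \pi : \pi = (1-\epsilon)\,\pi_0 + \epsilon\, q,\ q \in Q \,\}

% Zellner's (1986) g-prior for a coefficient vector \beta, conditional on
% the error variance \sigma^2 and the regressor matrix X:
\beta \mid \sigma^2, X \sim \mathcal{N}\!\left(\beta_0,\ g\,\sigma^2 (X'X)^{-1}\right)
```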

Details

Essays in Honor of M. Hashem Pesaran: Panel Modeling, Micro Applications, and Econometric Methodology
Type: Book
ISBN: 978-1-80262-065-8

Article
Publication date: 19 June 2020

Wing-Keung Wong

Abstract

Purpose

This paper aims to give a brief review of behavioral economics and behavioral finance and to discuss some of the previous research on agents' utility functions, applicable risk measures, diversification strategies and portfolio optimization.

Design/methodology/approach

The authors also cover related topics such as trading rules, contagion and the associated econometric issues.

Findings

Scholars may first develop theoretical models in behavioral economics and behavioral finance, then build the corresponding statistical and econometric models, and finally run simulation studies to examine whether the resulting estimators or test statistics have good power and size. Together, these steps help us better understand financial and economic decision-making from a descriptive standpoint.
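
To make the last step concrete, here is a minimal sketch (not from the paper; the test, effect size and sample size are illustrative) of the kind of simulation study used to check whether a statistic has good size and power:

```python
import numpy as np
from scipy import stats

def rejection_rate(effect, n=50, reps=5000, alpha=0.05, seed=0):
    """Share of Monte Carlo replications in which a two-sample t-test
    rejects H0 (equal means) at nominal level alpha."""
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(reps):
        x = rng.normal(0.0, 1.0, n)        # "control" sample
        y = rng.normal(effect, 1.0, n)     # "treatment" sample shifted by `effect`
        _, p = stats.ttest_ind(x, y)
        rejections += p < alpha
    return rejections / reps

# Empirical size (effect = 0) should sit near the nominal 5%,
# while empirical power should rise with the true effect.
print("size :", rejection_rate(effect=0.0))
print("power:", rejection_rate(effect=0.5))
```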

Originality/value

The paper provides an original, integrated review of behavioral economics and behavioral finance together with the econometric tools used to study them.

Details

Studies in Economics and Finance, vol. 37 no. 4
Type: Research Article
ISSN: 1086-7376

Content available
Book part
Publication date: 18 January 2022

Abstract

Details

Essays in Honor of M. Hashem Pesaran: Panel Modeling, Micro Applications, and Econometric Methodology
Type: Book
ISBN: 978-1-80262-065-8

Book part
Publication date: 23 November 2011

Ian M. McCarthy and Rusty Tchernis

Abstract

This chapter presents a Bayesian analysis of the endogenous treatment model with misclassified treatment participation. Our estimation procedure utilizes a combination of data augmentation, Gibbs sampling, and Metropolis–Hastings to obtain estimates of the misclassification probabilities and the treatment effect. Simulations demonstrate that the proposed Bayesian estimator accurately estimates the treatment effect in the presence of misclassification and endogeneity.
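
The following is a deliberately simplified sketch, not the authors' estimator: it keeps the treatment exogenous, treats the error variance and the treatment prevalence as known, and uses a plain random-walk Metropolis–Hastings sampler rather than the data-augmentation/Gibbs/Metropolis–Hastings combination described above; all names and tuning values are illustrative. It shows how the treatment effect and the misclassification probability can be sampled jointly once the latent true treatment is marginalized out of the likelihood.

```python
import numpy as np

rng = np.random.default_rng(1)

# --- simulate data: binary treatment recorded with classification error ---
n, b0, b1, alpha_true = 2000, 1.0, 2.0, 0.15
d_true = rng.binomial(1, 0.5, n)             # latent true treatment status
flip = rng.random(n) < alpha_true
d_obs = np.where(flip, 1 - d_true, d_true)   # misclassified indicator
y = b0 + b1 * d_true + rng.normal(0, 1, n)   # outcome depends on the truth

def loglik(theta):
    """Log-likelihood of (y, d_obs) with the true treatment D* summed out.
    Error variance (=1) and P(D*=1)=0.5 are treated as known for brevity."""
    b0_, b1_, a_ = theta
    if not 0.0 < a_ < 0.5:                   # flat priors; alpha restricted to (0, 0.5)
        return -np.inf
    mix = np.zeros(n)
    for d in (0, 1):                         # sum over latent D* in {0, 1}
        p_obs = np.where(d_obs == d, 1 - a_, a_)         # P(observed | D* = d)
        dens = np.exp(-0.5 * (y - b0_ - b1_ * d) ** 2) / np.sqrt(2 * np.pi)
        mix += 0.5 * p_obs * dens                        # P(D* = d) = 0.5
    return np.log(mix).sum()

# --- random-walk Metropolis-Hastings over (b0, b1, alpha) ---
theta, draws = np.array([0.0, 0.0, 0.25]), []
cur_ll = loglik(theta)
for _ in range(20000):
    prop = theta + rng.normal(0.0, [0.05, 0.05, 0.01])
    prop_ll = loglik(prop)
    if np.log(rng.random()) < prop_ll - cur_ll:          # accept/reject step
        theta, cur_ll = prop, prop_ll
    draws.append(theta)

post = np.array(draws[5000:])                            # discard burn-in
print("posterior means (b0, b1, alpha):", post.mean(axis=0))
```

A naive regression of y on the misclassified indicator would attenuate the treatment effect; marginalizing over the latent status is what lets the sampler recover it.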

Details

Missing Data Methods: Cross-sectional Methods and Applications
Type: Book
ISBN: 978-1-78052-525-9

Article
Publication date: 20 October 2011

Wei Chen, Ulrich Menzefricke and Wally J. Smieliauskas

Abstract

Purpose

The purpose of this paper is to summarize a simulation study that analyzed the performance of Bayesian audit strategies in a novel fashion – dynamically and with varying sample sizes depending on the extent of an auditor's prior information.

Design/methodology/approach

The prior information for the Bayesian strategies arises from a set of control tests that are evaluated making use of reliability theory. The entire audit strategy is simulated under systematically different control reliabilities and related amounts of total misstatements in an accounting population.
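
As a minimal illustration of how control-test results can feed a Bayesian audit strategy (a sketch under simple conjugate assumptions, not the simulation design of the paper; all figures are made up), a Beta prior on the misstatement rate can encode control reliability and then be updated with the substantive sample:

```python
from scipy import stats

# Illustrative priors only: strong controls justify a prior concentrated on
# low misstatement rates, weak controls a flatter, more pessimistic one.
priors = {"strong controls": (1, 49), "weak controls": (1, 9)}   # Beta(a, b)

tolerable_rate = 0.05          # tolerable misstatement rate
errors, sample_size = 2, 60    # results of the substantive audit sample

for controls, (a, b) in priors.items():
    posterior = stats.beta(a + errors, b + sample_size - errors)  # conjugate update
    risk = 1 - posterior.cdf(tolerable_rate)   # posterior P(rate > tolerable)
    print(f"{controls}: P(misstatement rate > {tolerable_rate:.0%}) = {risk:.3f}")
```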

Findings

The major finding is that robust Bayesian audit strategies that have recently been developed in auditing research are more sensitive to non‐sampling errors than existing strategies of audit practice.

Practical implications

The authors find that there are differential effects of sampling error vs non‐sampling error on the Bayesian strategies and that controls testing does not need to be extensive to get full internal control reliance.

Originality/value

The paper adds to existing research by examining the performance of various Bayesian audit strategies under more realistic audit conditions of sampling and non‐sampling uncertainty.

Details

Grey Systems: Theory and Application, vol. 1 no. 3
Type: Research Article
ISSN: 2043-9377

Book part
Publication date: 30 August 2019

Md. Nazmul Ahsan and Jean-Marie Dufour

Abstract

Statistical inference (estimation and testing) for the stochastic volatility (SV) model of Taylor (1982, 1986) is challenging, especially for likelihood-based methods, which are difficult to apply due to the presence of latent variables. The existing methods are either computationally costly or inefficient. In this paper, we propose computationally simple estimators for the SV model, which are at the same time highly efficient. The proposed class of estimators uses a small number of moment equations derived from an ARMA representation associated with the SV model, along with the possibility of using “winsorization” to improve stability and efficiency. We call these ARMA-SV estimators. Closed-form expressions for ARMA-SV estimators are obtained, and no numerical optimization procedure or choice of initial parameter values is required. The asymptotic distributional theory of the proposed estimators is studied. Due to their computational simplicity, the ARMA-SV estimators allow one to conduct reliable – even exact – simulation-based inference, through the application of Monte Carlo (MC) tests or bootstrap methods. We compare them in a simulation experiment with a wide array of alternative estimation methods, in terms of bias, root mean square error and computation time. In addition to confirming the enormous computational advantage of the proposed estimators, the results show that ARMA-SV estimators match (or exceed) alternative estimators in terms of precision, including the widely used Bayesian estimator. The proposed methods are applied to daily observations on the returns for three major stocks (Coca-Cola, Walmart, Ford) and the S&P Composite Price Index (2000–2017). The results confirm the presence of stochastic volatility with strong persistence.
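
For orientation, the SV model of Taylor and the ARMA representation underlying moment-based estimators can be summarized as follows (standard results stated in generic notation, not reproduced from the chapter):

```latex
% Basic stochastic volatility model (Taylor 1982, 1986):
y_t = \exp(h_t/2)\,\epsilon_t, \qquad
h_t = \mu + \phi\, h_{t-1} + \sigma_v v_t, \qquad
\epsilon_t, v_t \overset{iid}{\sim} \mathcal{N}(0,1)

% Log-squared returns are a noisy AR(1), so their reduced form is ARMA(1,1);
% moment equations for (\mu, \phi, \sigma_v) can be built on this representation:
\log y_t^2 = h_t + \log \epsilon_t^2
\quad\Longrightarrow\quad
\log y_t^2 \ \text{follows an ARMA}(1,1)
```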

Details

Topics in Identification, Limited Dependent Variables, Partial Observability, Experimentation, and Flexible Modeling: Part A
Type: Book
ISBN: 978-1-78973-241-2

Content available
Book part
Publication date: 15 April 2020

Abstract

Details

Essays in Honor of Cheng Hsiao
Type: Book
ISBN: 978-1-78973-958-9

Book part
Publication date: 12 December 2003

Tiemen Woutersen

Abstract

One way to control for the heterogeneity in panel data is to allow for time-invariant, individual-specific parameters. This fixed effect approach introduces many parameters into the model, which causes the “incidental parameter problem”: the maximum likelihood estimator is in general inconsistent. Woutersen (2001) shows how to approximately separate the parameters of interest from the fixed effects using a reparametrization. He then shows how a Bayesian method gives a general solution to the incidental parameter problem for correctly specified models. This paper extends Woutersen (2001) to misspecified models. Following White (1982), we assume that the expectation of the score of the integrated likelihood is zero at the true values of the parameters. We then derive the conditions under which a Bayesian estimator converges at rate √N, where N is the number of individuals. Under these conditions, we show that the variance-covariance matrix of the Bayesian estimator has the form of White (1982). We illustrate our approach with the dynamic linear model with fixed effects and a duration model with fixed effects.
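
For readers unfamiliar with the terminology, the two standard objects referenced in the abstract are the integrated likelihood, in which the fixed effect is integrated out, and the White (1982) sandwich form of the variance-covariance matrix (generic notation, not the paper's):

```latex
% Integrated likelihood for individual i, with fixed effect \lambda_i
% integrated out against a weight/prior \pi(\lambda_i):
L_i(\theta) = \int f(y_i \mid \theta, \lambda_i)\, \pi(\lambda_i)\, d\lambda_i

% White (1982) sandwich form of the asymptotic variance, with score s_i(\theta):
V(\hat{\theta}) = A^{-1} B\, A^{-1}, \qquad
A = \mathbb{E}\!\left[-\,\partial^2 \log L_i(\theta) / \partial\theta\,\partial\theta'\right], \qquad
B = \mathbb{E}\!\left[s_i(\theta)\, s_i(\theta)'\right]
```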

Details

Maximum Likelihood Estimation of Misspecified Models: Twenty Years Later
Type: Book
ISBN: 978-1-84950-253-5

Book part
Publication date: 30 August 2019

Timothy Cogley and Richard Startz

Abstract

Standard estimation of ARMA models in which the AR and MA roots nearly cancel, so that individual coefficients are only weakly identified, often produces inferential ranges for individual coefficients that give a spurious appearance of accuracy. We remedy this problem with a model that uses a simple mixture prior. The posterior mixing probability is derived using Bayesian methods, but we show that the method works well in both Bayesian and frequentist setups. In particular, we show that our mixture procedure weights standard results heavily when given data from a well-identified ARMA model (which does not exhibit near root cancellation) and weights heavily an uninformative inferential region when given data from a weakly identified ARMA model (with near root cancellation). When our procedure is applied to a well-identified process, the investigator gets the “usual results,” so there is no important statistical cost to using our procedure. On the other hand, when our procedure is applied to a weakly identified process, the investigator learns that the data tell us little about the parameters – and is thus protected against making spurious inferences. We recommend that mixture models be computed routinely when inference about ARMA coefficients is of interest.
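
To fix ideas, the weak-identification problem and the role of the mixture prior can be sketched schematically (generic notation, not the authors' exact specification; ω, π_ARMA and π_WN are illustrative labels):

```latex
% ARMA(1,1): when \phi \approx \theta the AR and MA factors nearly cancel,
% leaving the individual coefficients only weakly identified:
(1 - \phi L)\, y_t = (1 - \theta L)\, \varepsilon_t

% Schematic mixture prior over an unrestricted ARMA component and a
% "cancelled roots" (white-noise) component, with mixing weight \omega:
p(\phi, \theta) = \omega\, \pi_{\mathrm{ARMA}}(\phi, \theta)
                + (1 - \omega)\, \pi_{\mathrm{WN}}(\phi, \theta)
```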

Details

Topics in Identification, Limited Dependent Variables, Partial Observability, Experimentation, and Flexible Modeling: Part A
Type: Book
ISBN: 978-1-78973-241-2

Content available
Book part
Publication date: 18 January 2022

Abstract

Details

Essays in Honor of M. Hashem Pesaran: Prediction and Macro Modeling
Type: Book
ISBN: 978-1-80262-062-7
