Search results
1–10 of 433

Mahmoud ELsayed and Amr Soliman
Abstract
Purpose
The purpose of this study is to estimate the linear regression parameters using two alternative techniques: the first applies the generalized linear model (GLM), and the second uses the Markov chain Monte Carlo (MCMC) method.
Design/methodology/approach
In this paper, the authors adopted the incurred claims of the Egyptian non-life insurance market as the dependent variable over a 10-year period. MCMC uses Gibbs sampling to generate a sample from the posterior distribution of the linear regression to estimate the parameters of interest. The authors used R to estimate the parameters of the linear regression under both techniques.
Findings
These procedures will guide the decision-maker in estimating the reserve and setting a proper investment strategy.
Originality/value
In this paper, the authors estimate the parameters of a linear regression model using the MCMC method in R. MCMC uses Gibbs sampling to generate a sample from the posterior distribution of the linear regression, and the estimated parameters are used to predict future claims. Along the same lines, these procedures will guide the decision-maker in estimating the reserve and setting a proper investment strategy.
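The Gibbs-sampling scheme for Bayesian linear regression described in this abstract can be sketched as follows. The paper works in R; this illustrative Python translation assumes conjugate priors (a normal prior on the coefficients and an inverse-gamma prior on the error variance) that the abstract does not specify.

```python
import numpy as np

def gibbs_linear_regression(X, y, n_iter=2000, tau2=100.0, a=2.0, b=1.0, seed=0):
    """Gibbs sampler for y = X beta + eps, eps ~ N(0, sigma2 I).

    Illustrative conjugate priors (assumptions, not from the paper):
      beta   ~ N(0, tau2 * I)
      sigma2 ~ Inverse-Gamma(a, b)
    """
    rng = np.random.default_rng(seed)
    n, p = X.shape
    beta, sigma2 = np.zeros(p), 1.0
    draws_beta, draws_sigma2 = [], []
    XtX, Xty = X.T @ X, X.T @ y
    for _ in range(n_iter):
        # beta | sigma2, y  ~  N(m, V)
        V = np.linalg.inv(XtX / sigma2 + np.eye(p) / tau2)
        m = V @ (Xty / sigma2)
        beta = rng.multivariate_normal(m, V)
        # sigma2 | beta, y  ~  Inverse-Gamma(a + n/2, b + SSR/2)
        ssr = np.sum((y - X @ beta) ** 2)
        sigma2 = 1.0 / rng.gamma(a + n / 2, 1.0 / (b + ssr / 2))
        draws_beta.append(beta)
        draws_sigma2.append(sigma2)
    return np.array(draws_beta), np.array(draws_sigma2)
```

After a burn-in, the retained draws approximate the joint posterior; their means serve as point estimates of the regression parameters.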
Ivan Jeliazkov and Esther Hee Lee
Abstract
A major stumbling block in multivariate discrete data analysis is the problem of evaluating the outcome probabilities that enter the likelihood function. Calculation of these probabilities involves high-dimensional integration, making simulation methods indispensable in both Bayesian and frequentist estimation and model choice. We review several existing probability estimators and then show that a broader perspective on the simulation problem can be afforded by interpreting the outcome probabilities through Bayes’ theorem, leading to the recognition that estimation can alternatively be handled by methods for marginal likelihood computation based on the output of Markov chain Monte Carlo (MCMC) algorithms. These techniques offer stand-alone approaches to simulated likelihood estimation but can also be integrated with traditional estimators. Building on both branches in the literature, we develop new methods for estimating response probabilities and propose an adaptive sampler for producing high-quality draws from multivariate truncated normal distributions. A simulation study illustrates the practical benefits and costs associated with each approach. The methods are employed to estimate the likelihood function of a correlated random effects panel data model of women's labor force participation.
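A basic building block of the truncated-normal samplers discussed above (for example, inside GHK-style simulators) is drawing from a one-dimensional truncated normal. A minimal inverse-CDF sketch using only the Python standard library; the function name and interface are illustrative:

```python
import random
from statistics import NormalDist

def truncated_normal(mu, sigma, lo, hi, rng=random):
    """Draw one N(mu, sigma^2) sample restricted to (lo, hi) by inverse-CDF:
    map a uniform draw on (CDF(lo), CDF(hi)) back through the quantile
    function. Assumes finite truncation bounds."""
    nd = NormalDist(mu, sigma)
    u = rng.uniform(nd.cdf(lo), nd.cdf(hi))
    return nd.inv_cdf(u)
```

Repeating this draw coordinate-by-coordinate, conditioning each mean on the other coordinates, gives the Gibbs-style samplers for multivariate truncated normals that the abstract refers to.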
Daniel Watzenig, Markus Neumayer and Colin Fox
Abstract
Purpose
The purpose of this paper is to establish a cheap but accurate approximation of the forward map in electrical capacitance tomography in order to approach robust real‐time inversion in the framework of Bayesian statistics based on Markov chain Monte Carlo (MCMC) sampling.
Design/methodology/approach
Existing formulations and methods to reduce the order of the forward model with focus on electrical tomography are reviewed and compared. In this work, the problem of fast and robust estimation of shape and position of non‐conducting inclusions in an otherwise uniform background is considered. The boundary of the inclusion is represented implicitly using an appropriate interpolation strategy based on radial basis functions. The inverse problem is formulated as Bayesian inference, with MCMC sampling used to efficiently explore the posterior distribution. An affine approximation to the forward map built over the state space is introduced to significantly reduce the reconstruction time, while maintaining spatial accuracy. It is shown that the proposed approximation is unbiased and the variance of the introduced additional model error is even smaller than the measurement error of the tomography instrumentation. Numerical examples are presented, avoiding all inverse crimes.
Findings
Provides a consistent formulation of the affine approximation with application to imaging of binary mixtures in electrical tomography using MCMC sampling with Metropolis‐Hastings‐Green dynamics.
Practical implications
The proposed cheap approximation indicates that accurate real‐time inversion of capacitance data using statistical inversion is possible.
Originality/value
The proposed approach demonstrates that a tolerably small increase in posterior uncertainty of relevant parameters, e.g. inclusion area and contour shape, is traded for a huge reduction in computing time without introducing bias in estimates. Furthermore, the proposed framework – approximated forward map combined with statistical inversion – can be applied to all kinds of soft‐field tomography problems.
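The surrogate idea described above, replacing an expensive forward map with a cheap affine approximation inside an MCMC loop, can be sketched generically. This toy Python version (not the authors' capacitance-tomography code) builds the affine map from a finite-difference Jacobian and pairs it with a plain random-walk Metropolis sampler rather than Metropolis-Hastings-Green dynamics:

```python
import numpy as np

def affine_surrogate(forward, x0, eps=1e-6):
    """Build F(x) ~ F(x0) + J (x - x0) with a finite-difference Jacobian,
    so each MCMC likelihood evaluation avoids the expensive forward solve."""
    f0 = forward(x0)
    J = np.empty((f0.size, x0.size))
    for j in range(x0.size):
        dx = np.zeros_like(x0)
        dx[j] = eps
        J[:, j] = (forward(x0 + dx) - f0) / eps
    return lambda x: f0 + J @ (x - x0)

def metropolis(logpost, x0, n_iter=5000, step=0.1, seed=0):
    """Random-walk Metropolis sampler with a symmetric Gaussian proposal."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    lp = logpost(x)
    chain = np.empty((n_iter, x.size))
    for i in range(n_iter):
        prop = x + step * rng.normal(size=x.size)
        lp_prop = logpost(prop)
        # symmetric proposal: accept with probability min(1, posterior ratio)
        if np.log(rng.uniform()) < lp_prop - lp:
            x, lp = prop, lp_prop
        chain[i] = x
    return chain
```

In the paper's setting the surrogate is built once over the state space and its extra model error is shown to be smaller than the measurement error; this sketch only illustrates the mechanics of the combination.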
Cathy W.S. Chen, Richard Gerlach and Mike K.P. So
Abstract
It is well known that volatility asymmetry exists in financial markets. This paper reviews and investigates recently developed techniques for Bayesian estimation and model selection applied to a large group of modern asymmetric heteroskedastic models, including the GJR-GARCH, threshold autoregression with GARCH errors, TGARCH, and the double threshold heteroskedastic model with auxiliary threshold variables. Further, we briefly review recent methods for Bayesian model selection, such as reversible-jump Markov chain Monte Carlo, Monte Carlo estimation via independent sampling from each model, and importance sampling methods. Seven heteroskedastic models are then compared, for three long series of daily Asian market returns, in a model selection study illustrating the preferred model selection method. Major evidence of nonlinearity in mean and volatility is found, with the preferred model having a weighted threshold variable of local and international market news.
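For concreteness, the asymmetry captured by the GJR-GARCH model appears in its conditional-variance recursion, where a leverage term adds extra volatility after negative returns. A minimal sketch (initializing at the sample variance is a common convention, not taken from the paper):

```python
def gjr_garch_variance(returns, omega, alpha, gamma, beta):
    """Conditional-variance recursion of a GJR-GARCH(1,1) model:
        sigma2[t] = omega + (alpha + gamma * I[r[t-1] < 0]) * r[t-1]**2
                    + beta * sigma2[t-1]
    The leverage coefficient gamma raises volatility after negative
    returns, which is the asymmetry the paper discusses."""
    n = len(returns)
    sigma2 = [sum(r * r for r in returns) / n]  # start at sample variance
    for t in range(1, n):
        shock = returns[t - 1] ** 2
        lever = gamma * shock if returns[t - 1] < 0 else 0.0
        sigma2.append(omega + alpha * shock + lever + beta * sigma2[t - 1])
    return sigma2
```

Bayesian estimation of such models samples (omega, alpha, gamma, beta) by MCMC, evaluating the likelihood through exactly this recursion.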
Ivan Jeliazkov, Jennifer Graves and Mark Kutzbach
Abstract
In this paper, we consider the analysis of models for univariate and multivariate ordinal outcomes in the context of the latent variable inferential framework of Albert and Chib (1993). We review several alternative modeling and identification schemes and evaluate how each aids or hampers estimation by Markov chain Monte Carlo simulation methods. For each identification scheme we also discuss the question of model comparison by marginal likelihoods and Bayes factors. In addition, we develop a simulation-based framework for analyzing covariate effects that can provide interpretability of the results despite the nonlinearities in the model and the different identification restrictions that can be implemented. The methods are employed to analyze problems in labor economics (educational attainment), political economy (voter opinions), and health economics (consumers’ reliance on alternative sources of medical information).
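The Albert and Chib (1993) framework augments the ordinal model with latent Gaussian variables; the key Gibbs step draws each latent variable from a normal distribution truncated to the interval implied by the observed category. A minimal sketch of that step (the interface and the use of finite cutpoint bounds are illustrative):

```python
import random
from statistics import NormalDist

def sample_latent(y, mean, cutpoints, rng=random):
    """One Albert-Chib data-augmentation step for an ordinal probit model:
    draw z ~ N(mean, 1) truncated to (cutpoints[y], cutpoints[y + 1]),
    the interval implied by the observed category y (0-indexed).
    This sketch assumes large finite values stand in for the -inf/+inf
    end cutpoints, which keeps the inverse-CDF draw numerically safe."""
    nd = NormalDist(mean, 1.0)
    lo, hi = cutpoints[y], cutpoints[y + 1]
    u = rng.uniform(nd.cdf(lo), nd.cdf(hi))
    return nd.inv_cdf(u)
```

Within a full sampler, this draw alternates with a standard Gaussian update of the regression coefficients given the latent variables, which is what makes the different identification schemes the paper compares easy to implement.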
Badi H. Baltagi, Georges Bresson and Jean-Michel Etienne
Abstract
This chapter proposes semiparametric estimation of the relationship between growth rate of GDP per capita, growth rates of physical and human capital, labor as well as other covariates and common trends for a panel of 23 OECD countries observed over the period 1971–2015. The observed differentiated behaviors by country reveal strong heterogeneity. This is the motivation behind using a mixed fixed- and random coefficients model to estimate this relationship. In particular, this chapter uses a semiparametric specification with random intercepts and slopes coefficients. Motivated by Lee and Wand (2016), the authors estimate a mean field variational Bayes semiparametric model with random coefficients for this panel of countries. Results reveal nonparametric specifications for the common trends. The use of this flexible methodology may enrich the empirical growth literature underlining a large diversity of responses across variables and countries.
Zhe Yu, Raquel Prado, Steve C. Cramer, Erin B. Quinlan and Hernando Ombao
Abstract
We develop a Bayesian approach for modeling brain activation and connectivity from functional magnetic resonance image (fMRI) data. Our approach simultaneously estimates local hemodynamic response functions (HRFs) and activation parameters, as well as global effective and functional connectivity parameters. Existing methods assume identical HRFs across brain regions, which may lead to erroneous conclusions in inferring activation and connectivity patterns. Our approach addresses this limitation by estimating region-specific HRFs. Additionally, it enables neuroscientists to compare effective connectivity networks for different experimental conditions. Furthermore, the use of spike and slab priors on the connectivity parameters allows us to directly select significant effective connectivities in a given network.
We include a simulation study demonstrating that, compared to the standard generalized linear model (GLM) approach, our model generally has higher power and lower type I error and bias, and that it can capture condition-specific connectivities. We applied our approach to a dataset from a stroke study and found different effective connectivity patterns for task and rest conditions in certain brain regions of interest (ROIs).
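A spike-and-slab prior expresses each connectivity parameter as a mixture of a tight "spike" near zero and a diffuse "slab"; the posterior probability of the slab component then serves as a direct selection criterion. A minimal sketch with a continuous two-normal mixture (the paper's exact prior specification may differ):

```python
from statistics import NormalDist

def inclusion_probability(b, w=0.5, spike_sd=0.01, slab_sd=1.0):
    """Posterior probability that coefficient b came from the slab rather
    than the spike, under the illustrative two-component mixture prior
        b ~ w * N(0, slab_sd^2) + (1 - w) * N(0, spike_sd^2).
    Coefficients far from zero get probability near 1; coefficients
    near zero are absorbed by the spike and get probability near 0."""
    slab = w * NormalDist(0.0, slab_sd).pdf(b)
    spike = (1.0 - w) * NormalDist(0.0, spike_sd).pdf(b)
    return slab / (slab + spike)
```

In a full sampler this probability drives a Gibbs update of a binary inclusion indicator per connectivity, which is how "significant effective connectivities" are selected directly.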
Mohd Irfan and Anup Kumar Sharma
Abstract
Purpose
A progressive hybrid censoring scheme (PHCS) becomes impractical for ensuring dependable outcomes when there is a low likelihood of observing even a small number of failures before the predetermined terminal time T. The generalized progressive hybrid censoring scheme (GPHCS) efficiently overcomes this limitation of the PHCS.
Design/methodology/approach
In this article, estimation of the model parameter, survival function and hazard rate of the Unit-Lindley distribution (ULD) is considered when the sample comes from the GPHCS. The maximum likelihood estimator has been derived using the Newton–Raphson iterative procedure. Approximate confidence intervals for the model parameter and arbitrary functions of it are established via the Fisher information matrix. Bayesian estimates have been derived using the Metropolis–Hastings algorithm under the squared error loss function. Convergence of the Markov chain Monte Carlo (MCMC) samples has been examined, and various optimality criteria have been considered. An extensive Monte Carlo simulation analysis compares and validates the proposed estimation techniques.
Findings
The Bayesian MCMC approach is recommended for estimating the model parameters and reliability characteristics of generalized progressive hybrid censored data from the ULD. The authors anticipate that health data analysts and reliability professionals will benefit from the findings and approaches presented in this study.
Originality/value
The ULD has a broad range of practical utility, which makes estimation of its model parameter and reliability characteristics an important problem. The significance of the GPHCS further motivated the authors to consider the present estimation problem, which has not previously been discussed in the literature.
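The Metropolis–Hastings step under squared error loss amounts to averaging the posterior draws, since the Bayes estimate under that loss is the posterior mean. A generic random-walk sketch for a positive parameter (the Unit-Lindley log-likelihood would replace the placeholder log-posterior; the Gamma target in the usage example is purely illustrative):

```python
import math
import random

def mh_posterior_mean(log_post, theta0, n_iter=20000, step=0.3, burn=2000, seed=7):
    """Random-walk Metropolis-Hastings on log(theta) for a positive
    parameter. Returns the posterior mean of theta, i.e. the Bayes
    estimate under squared error loss. `log_post` is the log-posterior
    of theta itself; the `+ x` terms below are the Jacobian of the
    log transform."""
    rng = random.Random(seed)
    x = math.log(theta0)
    lp = log_post(math.exp(x)) + x
    total, count = 0.0, 0
    for i in range(n_iter):
        xp = x + step * rng.gauss(0.0, 1.0)
        lpp = log_post(math.exp(xp)) + xp
        # accept/reject; tiny epsilon guards log(0)
        if math.log(rng.random() + 1e-300) < lpp - lp:
            x, lp = xp, lpp
        if i >= burn:
            total += math.exp(x)
            count += 1
    return total / count
```

For example, with `log_post = lambda t: 4.0 * math.log(t) - 2.0 * t` (a Gamma(5, 2) target), the returned average approaches the true posterior mean 5/2.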
Leonidas A. Zampetakis and Vassilis S. Moustakis
Abstract
Purpose
The purpose of this paper is to present an inductive methodology that supports the ranking of entities. The methodology is based on Bayesian latent variable measurement modeling and uses assessment across composite indicators to evaluate internal and external model validity (uncertainty is used in lieu of validity). The proposed methodology is generic and is demonstrated on a well-known data set related to the relative position of a country in the "Doing Business" report.
Design/methodology/approach
The methodology is demonstrated using data from the World Bank's "Doing Business 2008" project. A Bayesian latent variable measurement model is developed and both internal and external model uncertainties are considered.
Findings
The methodology enables the quantification of model structure uncertainty through comparisons among competing models, nested or non‐nested using both an information theoretic approach and a Bayesian approach. Furthermore, it estimates the degree of uncertainty in the rankings of alternatives.
Research limitations/implications
Analyses are restricted to first‐order Bayesian measurement models.
Originality/value
Overall, the presented methodology contributes to a better understanding of ranking efforts providing a useful tool for those who publish rankings to gain greater insights into the nature of the distinctions they disseminate.
Garrison N. Stevens, Sez Atamturktur, D. Andrew Brown, Brian J. Williams and Cetin Unal
Abstract
Purpose
Partitioned analysis is an increasingly popular approach for modeling complex systems with behaviors governed by multiple, interdependent physical phenomena. Yielding accurate representations of reality from partitioned models depends on the availability of all necessary constituent models representing relevant physical phenomena. However, there are many engineering problems where one or more of the constituents may be unavailable because of lack of knowledge regarding the underlying principles governing the behavior or the inability to experimentally observe the constituent behavior in an isolated manner through separate-effect experiments. This study aims to enable partitioned analysis in such situations with an incomplete representation of the full system by inferring the behavior of the missing constituent.
Design/methodology/approach
This paper presents a statistical method for inverse analysis to infer missing constituent physics. The feasibility of the method is demonstrated using a physics-based visco-plastic self-consistent (VPSC) model that represents the mechanics of slip and twinning behavior in 5182 aluminum alloy. However, a constituent model for thermal analysis, representing the dependence of hardening parameters on temperature, is unavailable. Using integral-effect experimental data, the proposed approach is used to infer an empirical constituent model, which is then coupled with VPSC to obtain an experimentally augmented partitioned model representing the thermo-mechanical properties of 5182 aluminum alloy.
Findings
Results demonstrate the capability of the method to enable model predictions dependent upon relevant operational conditions. The VPSC model is coupled with the empirical constituent, and the newly enabled thermal-dependent predictions are compared with experimental data.
Originality/value
The method developed in this paper enables the empirical inference of a functional representation of input parameter values in lieu of a missing constituent model. Through this approach, development of partitioned models in the presence of uncertainty regarding a constituent model is made possible.