Search results
21 – 30 of over 9000
Roman Liesenfeld, Jean-François Richard and Jan Vogler
Abstract
We propose a generic algorithm for numerically accurate likelihood evaluation of a broad class of spatial models characterized by a high-dimensional latent Gaussian process and non-Gaussian response variables. The class of models under consideration includes specifications for discrete choices, event counts and limited-dependent variables (truncation, censoring, and sample selection) among others. Our algorithm relies upon a novel implementation of efficient importance sampling (EIS) specifically designed to exploit typical sparsity of high-dimensional spatial precision (or covariance) matrices. It is numerically very accurate and computationally feasible even for very high-dimensional latent processes. Thus, maximum likelihood (ML) estimation of high-dimensional non-Gaussian spatial models, hitherto considered to be computationally prohibitive, becomes feasible. We illustrate our approach with ML estimation of a spatial probit for US presidential voting decisions and spatial count data models (Poisson and Negbin) for firm location choices.
Details
Keywords
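The EIS implementation itself is not reproduced here, but the sparsity argument at its heart can be sketched: evaluating a Gaussian log-density with a sparse precision matrix via a sparse factorization, so that cost scales with the number of non-zeros rather than cubically in the dimension. This is a minimal illustration, assuming a hypothetical tridiagonal (CAR-type) precision matrix; the function name and toy numbers are invented for the example.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

def gaussian_loglik_sparse(y, Q):
    """Log-density of y ~ N(0, Q^{-1}) for a sparse precision matrix Q.

    A sparse LU factorization gives the log-determinant, and the quadratic
    form uses sparse matrix-vector products, avoiding dense n^3 algebra.
    """
    n = y.size
    lu = splu(sp.csc_matrix(Q))
    # log|Q| = sum of log|diag(U)| (L has a unit diagonal; Q is PD here)
    logdet = np.sum(np.log(np.abs(lu.U.diagonal())))
    quad = y @ (Q @ y)
    return 0.5 * (logdet - n * np.log(2 * np.pi) - quad)

# Toy example: a 1-D CAR-type precision matrix (tridiagonal, hence sparse)
n = 500
main = 2.0 * np.ones(n)
off = -0.9 * np.ones(n - 1)
Q = sp.diags([off, main, off], offsets=[-1, 0, 1], format="csc")
y = np.zeros(n)
print(gaussian_loglik_sparse(y, Q))
```

For a two-dimensional lattice the precision matrix is banded rather than tridiagonal, but the same factorization applies; only the fill-in of the LU factors grows.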
A. George Assaf and Mike Tsionas
Abstract
Purpose
This paper aims to focus on addressing endogeneity using instrument-free methods. The authors discuss some extensions to well-known techniques.
Design/methodology/approach
This paper discusses some attractive methods to address endogeneity without the need for instruments. The methods are labeled “harmless” in the sense that instruments are not needed and the distributional assumptions are either kept to a minimum or replaced by more flexible semi-parametric assumptions.
Findings
Using a hospitality application, the authors provide evidence about the effectiveness of these techniques and provide directions for their implementation.
Research limitations/implications
Finding valid instruments has always been a key challenge for researchers in the field. This paper discusses and introduces methods that free researchers from the need to find instruments.
Originality/value
The paper discusses techniques that are introduced for the first time in the tourism literature.
Details
Keywords
Abstract
An important but often overlooked obstacle in multivariate discrete data models is the specification of endogenous covariates. Endogeneity can be modeled as latent or observed, representing competing hypotheses about the outcomes being considered. However, little attention has been paid to deciphering which specification is best supported by the data. This paper highlights the use of existing Bayesian model comparison techniques to investigate the proper specification for endogenous covariates and to understand the nature of endogeneity. Consideration of both observed and latent modeling approaches is emphasized in two empirical applications. The first application examines linkages for banking contagion and the second application evaluates the impact of education on socioeconomic outcomes.
Details
Keywords
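The abstract does not specify which comparison techniques are used; as a generic stand-in, the BIC approximation to the log marginal likelihood can rank two competing specifications. The hypothetical example below compares a linear model with and without an irrelevant covariate; it is not the banking-contagion or education application.

```python
import numpy as np

def bic_log_marglik(y, X):
    """BIC approximation to the log marginal likelihood of a Gaussian
    linear model y = X b + e."""
    n, k = X.shape
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ b
    sigma2 = resid @ resid / n
    loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
    return loglik - 0.5 * k * np.log(n)

rng = np.random.default_rng(0)
n = 400
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = 1.0 + 0.8 * x1 + rng.normal(size=n)     # x2 is irrelevant by construction

X_small = np.column_stack([np.ones(n), x1])
X_big = np.column_stack([np.ones(n), x1, x2])

lm = np.array([bic_log_marglik(y, X_small), bic_log_marglik(y, X_big)])
post = np.exp(lm - lm.max())
post /= post.sum()        # posterior model probabilities under equal prior odds
print(post)
```

The same ranking logic extends to the latent-versus-observed endogeneity specifications discussed in the abstract, with the marginal likelihoods computed for each specification in turn.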
Christopher J. O’Donnell and Vanessa Rayner
Abstract
In their seminal papers on ARCH and GARCH models, Engle (1982) and Bollerslev (1986) specified parametric inequality constraints that were sufficient for non-negativity and weak stationarity of the estimated conditional variance function. This paper uses Bayesian methodology to impose these constraints on the parameters of an ARCH(3) and a GARCH(1,1) model. The two models are used to explain volatility in the London Metals Exchange Index. Model uncertainty is resolved using Bayesian model averaging. Results include estimated posterior pdfs for one-step-ahead conditional variance forecasts.
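A minimal sketch of how such constraints can be imposed in a Bayesian sampler, assuming a random-walk Metropolis algorithm with a flat prior truncated to the constraint set ω > 0, α, β ≥ 0, α + β < 1 for a GARCH(1,1). The data are simulated, not the London Metals Exchange Index, and the tuning values are arbitrary.

```python
import numpy as np

def garch_loglik(params, y):
    """Gaussian log-likelihood of GARCH(1,1): h_t = w + a*y_{t-1}^2 + b*h_{t-1}."""
    w, a, b = params
    h = np.empty_like(y)
    h[0] = np.var(y)
    for t in range(1, y.size):
        h[t] = w + a * y[t - 1] ** 2 + b * h[t - 1]
    return -0.5 * np.sum(np.log(2 * np.pi * h) + y ** 2 / h)

def constraints_ok(p):
    w, a, b = p
    return w > 0 and a >= 0 and b >= 0 and a + b < 1   # non-negativity + weak stationarity

def metropolis(y, n_iter=2000, scale=0.02, seed=1):
    rng = np.random.default_rng(seed)
    p = np.array([0.1, 0.1, 0.5])
    lp = garch_loglik(p, y)
    draws = []
    for _ in range(n_iter):
        prop = p + scale * rng.normal(size=3)
        if constraints_ok(prop):           # flat prior truncated to the constraint set
            lp_prop = garch_loglik(prop, y)
            if np.log(rng.uniform()) < lp_prop - lp:
                p, lp = prop, lp_prop
        draws.append(p.copy())
    return np.array(draws)

# Simulate a short GARCH(1,1) series as stand-in data
rng = np.random.default_rng(0)
T, w0, a0, b0 = 500, 0.05, 0.1, 0.8
y = np.empty(T)
h = 0.5
for t in range(T):
    h = w0 + a0 * (y[t - 1] ** 2 if t else 0.25) + b0 * h
    y[t] = np.sqrt(h) * rng.normal()

draws = metropolis(y)
print(draws[-500:].mean(axis=0))  # posterior means satisfy the constraints (convex set)
```

Because proposals outside the constraint set are rejected outright, every retained draw — and hence any posterior mean or one-step-ahead variance forecast built from the draws — honors non-negativity and weak stationarity by construction.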
Abstract
This chapter focuses on dispute resolution in French labor courts. We empirically investigate the forces that shape decision-making in the pretrial conciliation phase. For that purpose, we compiled a new database from legal documents. The results are twofold. First, conciliation is less likely when plaintiffs are assisted by a lawyer. Although this result might be interpreted in various ways, further analysis shows that the lawyers’ remuneration scheme is the most likely cause of this effect. Second, we find that the likelihood of settlement decreases as the amount at stake increases. These results contribute to the ongoing debate about French labor court reform.
Details
Keywords
S. Rama Krishna, J. Sathish, Talari Rahul Mani Datta and S. Raghu Vamsi
Abstract
Purpose
Ensuring the early detection of structural issues in aircraft is crucial for preserving human lives. One effective approach involves identifying cracks in composite structures. This paper employs experimental modal analysis and a multi-variable Gaussian process regression method to detect and locate cracks in glass fiber composite beams.
Design/methodology/approach
The present study proposes a multi-variable Gaussian process regression model for crack localization, trained on the first three natural frequencies of the cracked composite beams. These frequencies are determined experimentally using a roving impact hammer with a four-channel crystal analyzer, a uniaxial accelerometer and experimental modal analysis software. A radial basis function is used as the kernel, and the hyperparameters are optimized using the negative log marginal likelihood. The Bayesian conditional (predictive) distribution is used to estimate the mean and variance of the crack location in composite structures.
Findings
The efficiency of Gaussian process regression is improved in the present work by normalizing the input data. The fitted Gaussian process regression model is validated against experimental modal analysis for crack localization in composite structures. The discrepancy between predicted and measured values is 1.8%, indicating strong agreement between the two methods. Compared to other recent methods in the literature, this approach significantly improves efficiency, reducing the error from 18.4% to 1.8%. Gaussian process regression is thus an efficient machine learning algorithm for crack localization in composite structures.
Originality/value
The experimental modal analysis results are first utilized for crack localization in cracked composite structures. Additionally, the input data are normalized and employed in a machine learning algorithm, such as the multi-variable Gaussian process regression method, to efficiently determine the crack location in these structures.
Details
Keywords
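The paper's experimental pipeline cannot be reproduced here, but the regression step can be sketched with a from-scratch Gaussian process using an RBF kernel, returning a predictive mean and variance as the abstract describes. The training data below are synthetic stand-ins for the normalized natural frequencies and crack locations, and the hyperparameters are fixed rather than optimized by the negative log marginal likelihood.

```python
import numpy as np

def rbf(A, B, ell=1.0, sf=1.0):
    """RBF (squared-exponential) kernel between the rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return sf ** 2 * np.exp(-0.5 * d2 / ell ** 2)

def gpr_fit_predict(X, y, Xs, noise=1e-4, ell=1.0):
    """Exact GP regression: predictive mean and variance at the points Xs."""
    K = rbf(X, X, ell) + noise * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    Ks = rbf(X, Xs, ell)
    mean = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = rbf(Xs, Xs, ell).diagonal() - (v ** 2).sum(0)
    return mean, var

# Hypothetical training set: normalized first three frequencies -> crack location
rng = np.random.default_rng(0)
X = rng.uniform(size=(30, 3))            # stand-in for normalized frequencies
loc = X @ np.array([0.5, 0.3, 0.2])      # stand-in "true" frequency-to-location map
mean, var = gpr_fit_predict(X, loc, X[:5])
print(mean, var)
```

The predictive variance plays the role of the localization uncertainty in the abstract: a large variance at a candidate frequency triple flags an unreliable crack-location estimate.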
Badi H. Baltagi, Georges Bresson, Anoop Chaturvedi and Guy Lacroix
Abstract
This chapter extends the work of Baltagi, Bresson, Chaturvedi, and Lacroix (2018) to the popular dynamic panel data model. The authors investigate the robustness of Bayesian panel data models to possible misspecification of the prior distribution. The proposed robust Bayesian approach departs from the standard Bayesian framework in two ways. First, the authors consider the ε-contamination class of prior distributions for the model parameters as well as for the individual effects. Second, both the base elicited priors and the ε-contamination priors use Zellner’s (1986) g-priors for the variance–covariance matrices. The authors propose a general “toolbox” for a wide range of specifications which includes the dynamic panel model with random effects, with cross-correlated effects à la Chamberlain, for the Hausman–Taylor world and for dynamic panel data models with homogeneous/heterogeneous slopes and cross-sectional dependence. Using a Monte Carlo simulation study, the authors compare the finite sample properties of the proposed estimator to those of standard classical estimators. The chapter contributes to the dynamic panel data literature by proposing a general robust Bayesian framework which encompasses the conventional frequentist specifications and their associated estimation methods as special cases.
Details
Keywords
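The dynamic-panel toolbox is far beyond a short example, but the ε-contamination idea itself can be illustrated in the simplest conjugate setting: a normal mean with prior (1 − ε)·N(m0, v0) + ε·N(mq, vq), whose posterior is a mixture of the two conjugate posteriors, reweighted by each component's marginal likelihood. All numbers below are hypothetical.

```python
import numpy as np

def norm_logpdf(x, m, v):
    return -0.5 * (np.log(2 * np.pi * v) + (x - m) ** 2 / v)

def eps_contaminated_posterior_mean(ybar, n, sigma2, eps,
                                    m0=0.0, v0=1.0, mq=0.0, vq=100.0):
    """Posterior mean of a normal mean mu, with y_i ~ N(mu, sigma2) and the
    epsilon-contaminated prior (1-eps)*N(m0, v0) + eps*N(mq, vq)."""
    s2 = sigma2 / n                       # variance of the sample mean
    # component posterior means (standard conjugate normal updates)
    post0 = (ybar / s2 + m0 / v0) / (1 / s2 + 1 / v0)
    postq = (ybar / s2 + mq / vq) / (1 / s2 + 1 / vq)
    # mixture weights from each component's marginal likelihood of ybar
    lw0 = np.log(1 - eps) + norm_logpdf(ybar, m0, s2 + v0)
    lwq = np.log(eps) + norm_logpdf(ybar, mq, s2 + vq)
    w0 = 1 / (1 + np.exp(lwq - lw0))
    return w0 * post0 + (1 - w0) * postq

# When the data clash with the base prior, the diffuse contamination absorbs them
print(eps_contaminated_posterior_mean(ybar=5.0, n=50, sigma2=1.0, eps=0.1))
```

This is the robustness mechanism the chapter exploits: a misspecified base prior is automatically down-weighted when its marginal likelihood is poor, which is what makes the resulting panel estimators robust to prior misspecification.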
Abstract
In this paper, I propose an algorithm combining adaptive sampling and Reversible Jump MCMC to deal with the problem of variable selection in time-varying linear models. These models arise naturally in financial applications, as illustrated by a motivational example. The methodology proposed here, dubbed adaptive reversible jump variable selection, differs from typical approaches by avoiding estimation of the factors and the difficulties stemming from the well-documented single-factor bias. Illustrated by several simulated examples, the algorithm is shown to select the appropriate variables among a large set of candidates.
Details
Keywords
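The adaptive reversible jump sampler is not reproduced here; as a simplified stand-in, a reversible-jump-style sampler over inclusion indicators can use a BIC approximation to the marginal likelihood and a symmetric flip-one-variable proposal. The data, dimensions, and function names below are invented for illustration.

```python
import numpy as np

def bic_score(y, X, idx):
    """BIC-approximate log marginal likelihood of the linear model with an
    intercept plus the columns of X listed in idx."""
    n = y.size
    Xm = np.column_stack([np.ones(n)] + [X[:, j] for j in idx])
    b, *_ = np.linalg.lstsq(Xm, y, rcond=None)
    r = y - Xm @ b
    return -0.5 * n * np.log(r @ r / n) - 0.5 * Xm.shape[1] * np.log(n)

def rj_variable_selection(y, X, n_iter=3000, seed=2):
    """Flip-one-indicator sampler over models; returns inclusion frequencies."""
    rng = np.random.default_rng(seed)
    p = X.shape[1]
    included = set()
    score = bic_score(y, X, sorted(included))
    counts = np.zeros(p)
    for _ in range(n_iter):
        j = int(rng.integers(p))
        prop = included ^ {j}              # flip variable j in or out
        s = bic_score(y, X, sorted(prop))
        if np.log(rng.uniform()) < s - score:   # symmetric proposal: MH on model space
            included, score = prop, s
        for k in included:
            counts[k] += 1
    return counts / n_iter

rng = np.random.default_rng(0)
n, p = 300, 10
X = rng.normal(size=(n, p))
y = 1.0 + 2.0 * X[:, 0] - 1.5 * X[:, 3] + rng.normal(size=n)
probs = rj_variable_selection(y, X)
print(probs)                               # columns 0 and 3 should dominate
```

The paper's algorithm additionally adapts the proposal over time and handles time-varying coefficients; the fixed-coefficient sketch above only conveys the jump-between-models mechanism.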
Beatriz Vaz de Melo Mendes and Cecília Aíube
Abstract
Purpose
This paper aims to statistically model the serial dependence in the first and second moments of a univariate time series using copulas, bridging the gap between theory and applications, which are the focus of risk managers.
Design/methodology/approach
The appealing feature of the method is that it captures not just the linear form of dependence (a job usually accomplished by ARIMA linear models), but also non‐linear forms, including tail dependence, the dependence occurring only among extreme values. In addition, it investigates changes in the mean modeling after whitening the data through the application of GARCH‐type filters. A total of 62 US stocks are selected to illustrate the methodologies.
Findings
The copula‐based results corroborate empirical evidence on the existence of linear and non‐linear dependence at the mean and volatility levels, and contribute to practice by providing a simple but powerful method for capturing the dynamics in a time series. Applications may follow and include VaR calculation, simulation‐based derivatives pricing, and asset allocation decisions. The authors note that the literature is still inconclusive as to the most appropriate value‐at‐risk computing approach, which seems to be a data‐dependent decision.
Originality/value
This paper uses a conditional copula approach for modeling the time dependence in the mean and variance of a univariate time series.
Details
Keywords
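The parametric conditional copulas and GARCH filtering of the paper are not reproduced here; a minimal numpy sketch shows the underlying idea of rank-transforming a series to empirical-copula margins and measuring linear and upper-tail dependence between each value and its lag. The GARCH-type toy series and the tail threshold are invented for the example.

```python
import numpy as np

def pseudo_obs(x):
    """Rank-transform to (0,1): the empirical-copula margins."""
    n = x.size
    return (np.argsort(np.argsort(x)) + 1) / (n + 1)

def lag1_dependence(x, q=0.95):
    """Spearman correlation and a crude upper-tail dependence estimate
    for the lagged pairs (x_t, x_{t+1})."""
    u = pseudo_obs(x)
    u0, u1 = u[:-1], u[1:]
    rho_s = np.corrcoef(u0, u1)[0, 1]            # linear dependence of ranks
    above = u0 > q
    # lambda_U(q): P(u_{t+1} > q | u_t > q), dependence among extremes only
    lam_u = (u1[above] > q).mean() if above.any() else np.nan
    return rho_s, lam_u

# GARCH-type toy series: squared returns are serially dependent, returns are not
rng = np.random.default_rng(0)
T = 4000
e = rng.normal(size=T)
h = np.empty(T); r = np.empty(T)
h[0] = 2.0; r[0] = np.sqrt(h[0]) * e[0]
for t in range(1, T):
    h[t] = 0.1 + 0.25 * r[t - 1] ** 2 + 0.7 * h[t - 1]
    r[t] = np.sqrt(h[t]) * e[t]

print(lag1_dependence(r))        # raw returns: weak rank correlation
print(lag1_dependence(r ** 2))   # squared returns: clear serial dependence
```

In the paper this contrast motivates fitting a copula to the lagged pairs before and after GARCH whitening, which separates dependence in the mean from dependence in the volatility.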
Abstract
Purpose
The paper aims to examine the factors that influence the turnover intention of information system (IS) personnel.
Design/methodology/approach
Anchored in the theory of human capital and the theory of planned behavior, as well as an extensive review of existing turnover literature, the authors propose a novel set of variables based on the three‐level analysis framework suggested by Joseph et al. to examine IS turnover intention. At the individual level, IT certifications, IT experience, and past external and internal turnover behaviors are considered. At the firm level, industry type (IT versus non‐IT firms) and IT human resource practices regarding raise and promotion are included. Finally, at the environmental level, personal concerns about external changes characterized by IT outsourcing and offshoring are studied. The authors investigate the impact of these variables on turnover intention using a large sample of 10,085 IT professionals working in the USA.
Findings
The empirical analysis based on logistic regression indicates significant associations between the variables and turnover intention.
Research limitations/implications
Future research may be directed toward developing multiple‐item measures for better validity and reliability of the study.
Practical implications
The authors derive managerial implications that may help guide firms to formulate effective human resource management and retention policies and strategies. These include the importance of organizational support for certification programs and a retention strategy based on the three‐phase career life cycle of IT professionals.
Originality/value
The study shows many interesting findings, some of which contrast with existing assertions. For example, the authors do not find the inverted U‐shaped curvilinear relationship between IT experience and turnover intention reported in previous research.
Details