Search results
1 – 10 of 514

Bertrand Candelon, Elena-Ivona Dumitrescu, Christophe Hurlin and Franz C. Palm
Abstract
In this article we propose a multivariate dynamic probit model. Our model can be viewed as a nonlinear VAR model for the latent variables associated with correlated binary time-series data. To estimate it, we implement an exact maximum likelihood approach, hence providing a solution to the problem generally encountered in the formulation of multivariate probit models. Our framework allows us to study the predictive relationships among the binary processes under analysis. Finally, an empirical study of three financial crises is conducted.
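The latent-variable VAR structure described above can be illustrated with a small simulation; the parameter values and dimensions below are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative bivariate dynamic probit: latent y*_t = c + A @ y_{t-1} + e_t,
# e_t ~ N(0, Sigma) with correlated errors; observed y_t = 1[y*_t > 0].
c = np.array([0.1, -0.2])
A = np.array([[0.8, 0.3],      # each binary series depends on both lagged series
              [0.2, 0.6]])
Sigma = np.array([[1.0, 0.5],
                  [0.5, 1.0]])
L = np.linalg.cholesky(Sigma)

T = 500
y = np.zeros((T, 2))
for t in range(1, T):
    latent = c + A @ y[t - 1] + L @ rng.standard_normal(2)
    y[t] = (latent > 0).astype(float)

print(y.mean(axis=0))  # unconditional frequencies of the two binary series
```

Estimation by exact maximum likelihood, as the authors propose, would maximise the likelihood of the observed binary sequences over c, A and Sigma; the sketch above only shows the data-generating side of the model.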
Lorenzo Cappellari and Stephen P. Jenkins
Abstract
We analyse the dynamics of social assistance benefit (SA) receipt among working-age adults in Britain between 1991 and 2005. The decline in the annual SA receipt rate was driven by a decline in the SA entry rate rather than by the SA exit rate (which also declined). We examine the determinants of these trends using a multivariate dynamic random effects probit model of SA receipt probabilities applied to British Household Panel Survey data. We show how the model may be used to derive year-by-year predictions of aggregate SA entry, exit and receipt rates. The analysis highlights the importance of the decline in the unemployment rate over the period and other changes in the socio-economic environment including two reforms to the income maintenance system in the 1990s and also illustrates the effects of self-selection (‘creaming’) on observed and unobserved characteristics.
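The year-by-year link between entry, exit and aggregate receipt rates rests on a simple accounting identity; a minimal sketch with illustrative rates (not the paper's estimates):

```python
# Receipt rate evolves as: r_{t+1} = entry_t * (1 - r_t) + (1 - exit_t) * r_t,
# i.e. non-recipients who enter plus recipients who do not exit.
def receipt_path(r0, entry_rates, exit_rates):
    rates = [r0]
    for entry, exit_ in zip(entry_rates, exit_rates):
        r = rates[-1]
        rates.append(entry * (1 - r) + (1 - exit_) * r)
    return rates

# Illustrative: a falling entry rate with a constant exit rate,
# mirroring the paper's finding that falling entry drove the decline.
path = receipt_path(0.10, entry_rates=[0.05, 0.04, 0.03], exit_rates=[0.30] * 3)
print([round(r, 4) for r in path])  # → [0.1, 0.115, 0.1159, 0.1077]
```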
Kenneth Y. Chay and Dean R. Hyslop
Abstract
We examine the roles of sample initial conditions and unobserved individual effects in consistent estimation of the dynamic binary response panel data model. Different specifications of the model are estimated using female welfare and labor force participation data from the Survey of Income and Program Participation. These include alternative random effects (RE) models, in which the conditional distributions of both the unobserved heterogeneity and the initial conditions are specified, and fixed effects (FE) conditional logit models that make no assumptions on either distribution. There are several findings. First, the hypothesis that the sample initial conditions are exogenous is rejected by both samples. Misspecification of the initial conditions results in drastically overstated estimates of the state dependence and understated estimates of the short- and long-run effects of children on labor force participation. The FE conditional logit estimates are similar to the estimates from the RE model that is flexible with respect to both the initial conditions and the correlation between the unobserved heterogeneity and the covariates. For female labor force participation, there is evidence that fertility choices are correlated with both unobserved heterogeneity and pre-sample participation histories.
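The distinction between true state dependence and unobserved heterogeneity that underlies the initial-conditions problem can be seen in a toy simulation: individual effects alone, with no dynamics at all, already generate strong raw persistence in a binary panel:

```python
import numpy as np

rng = np.random.default_rng(4)

# Spurious state dependence: the DGP has NO true dynamics, only a
# permanent individual effect alpha_i, yet raw transition frequencies
# show P(y_t=1 | y_{t-1}=1) well above P(y_t=1 | y_{t-1}=0).
N, T = 2000, 10
alpha = rng.standard_normal(N)[:, None]          # individual effects
y = (alpha + rng.standard_normal((N, T)) > 0).astype(int)

prev, curr = y[:, :-1].ravel(), y[:, 1:].ravel()
p_stay = curr[prev == 1].mean()                  # persistence given y=1
p_enter = curr[prev == 0].mean()                 # entry given y=0
print(p_stay, p_enter)                           # roughly 2/3 vs 1/3
```

Separating this spurious persistence from genuine state dependence is exactly what the RE and FE estimators compared in the paper are designed to do, and why misspecified initial conditions overstate the state-dependence estimate.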
Abstract
Purpose
Recent research has found significant relationships between internet search volume and real estate markets. This paper aims to examine whether Google search volume data can serve as a leading sentiment indicator and are able to predict turning points in the US housing market. One of the main objectives is to find a model based on internet search interest that generates reliable real-time forecasts.
Design/methodology/approach
Starting from seven individual real-estate-related Google search volume indices, a multivariate probit model is derived by following a selection procedure. The best model is then tested for its in- and out-of-sample forecasting ability.
Findings
The results show that the model predicts the direction of monthly price changes correctly in over 89 per cent of cases in-sample and in just above 88 per cent of one- to four-month out-of-sample forecasts. The out-of-sample tests demonstrate that although the Google model is not always accurate in terms of timing, its signals are always correct when it comes to foreseeing an upcoming turning point. Thus, as signals are generated up to six months early, it functions as a satisfactory and timely indicator of future house price changes.
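A binary direction-of-change model of this general type can be sketched as a probit fitted by maximum likelihood; the simulated predictors below are only a stand-in for the paper's Google search-volume indices:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(1)

# Simulated stand-in for search-volume predictors and a binary
# "price change direction" outcome generated by a probit model.
n = 400
X = np.column_stack([np.ones(n), rng.standard_normal((n, 2))])
beta_true = np.array([0.2, 1.0, -0.7])
y = (X @ beta_true + rng.standard_normal(n) > 0).astype(float)

def neg_loglik(beta):
    p = np.clip(norm.cdf(X @ beta), 1e-12, 1 - 1e-12)
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

beta_hat = minimize(neg_loglik, np.zeros(3)).x
hit_rate = ((norm.cdf(X @ beta_hat) > 0.5) == y).mean()
print(hit_rate)  # in-sample share of correctly predicted directions
```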
Practical implications
The results suggest that Google data can serve as an early market indicator and that the application of this data set in binary forecasting models can produce useful predictions of changes in upward and downward movements of US house prices, as measured by the Case–Shiller 20-City House Price Index. This implies that real estate forecasters, economists and policymakers should consider incorporating this free and very current data set into their market forecasts or when performing plausibility checks for future investment decisions.
Originality/value
This is the first paper to apply Google search query data as a sentiment indicator in binary forecasting models to predict turning points in the housing market.
Ivan Jeliazkov, Jennifer Graves and Mark Kutzbach
Abstract
In this paper, we consider the analysis of models for univariate and multivariate ordinal outcomes in the context of the latent variable inferential framework of Albert and Chib (1993). We review several alternative modeling and identification schemes and evaluate how each aids or hampers estimation by Markov chain Monte Carlo simulation methods. For each identification scheme we also discuss the question of model comparison by marginal likelihoods and Bayes factors. In addition, we develop a simulation-based framework for analyzing covariate effects that can provide interpretability of the results despite the nonlinearities in the model and the different identification restrictions that can be implemented. The methods are employed to analyze problems in labor economics (educational attainment), political economy (voter opinions), and health economics (consumers’ reliance on alternative sources of medical information).
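The Albert and Chib (1993) augmentation alternates between drawing the latent utilities given the data and drawing the regression coefficients given the latents. A minimal Gibbs sampler for the binary probit case with a flat prior on the coefficients (a sketch, not the authors' full ordinal implementation):

```python
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(2)

# Simulated binary probit data
n, k = 300, 2
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
beta_true = np.array([0.5, 1.0])
y = (X @ beta_true + rng.standard_normal(n) > 0).astype(int)

XtX_inv = np.linalg.inv(X.T @ X)
chol = np.linalg.cholesky(XtX_inv)
beta = np.zeros(k)
draws = []
for it in range(600):
    # 1) Draw latent z_i ~ N(x_i'beta, 1) truncated to (0, inf) if y_i = 1,
    #    and to (-inf, 0) if y_i = 0.
    mu = X @ beta
    lo = np.where(y == 1, -mu, -np.inf)   # bounds in standard-normal units
    hi = np.where(y == 1, np.inf, -mu)
    z = mu + truncnorm.rvs(lo, hi, random_state=rng)
    # 2) Draw beta | z ~ N((X'X)^{-1} X'z, (X'X)^{-1})  (flat prior).
    beta = XtX_inv @ (X.T @ z) + chol @ rng.standard_normal(k)
    if it >= 100:                          # discard burn-in
        draws.append(beta)

print(np.mean(draws, axis=0))  # posterior mean, close to beta_true
```

The ordinal case adds a step for the cutpoints, and the identification schemes the paper compares differ in how the latent scale and cutpoints are normalised.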
Roman Liesenfeld, Jean-François Richard and Jan Vogler
Abstract
We propose a generic algorithm for numerically accurate likelihood evaluation of a broad class of spatial models characterized by a high-dimensional latent Gaussian process and non-Gaussian response variables. The class of models under consideration includes specifications for discrete choices, event counts and limited-dependent variables (truncation, censoring, and sample selection) among others. Our algorithm relies upon a novel implementation of efficient importance sampling (EIS) specifically designed to exploit typical sparsity of high-dimensional spatial precision (or covariance) matrices. It is numerically very accurate and computationally feasible even for very high-dimensional latent processes. Thus, maximum likelihood (ML) estimation of high-dimensional non-Gaussian spatial models, hitherto considered to be computationally prohibitive, becomes feasible. We illustrate our approach with ML estimation of a spatial probit for US presidential voting decisions and spatial count data models (Poisson and Negbin) for firm location choices.
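The importance-sampling idea underlying EIS can be illustrated on the simplest latent-Gaussian likelihood contribution, a bivariate normal orthant probability, using a GHK-style sequential sampler (a toy stand-in, not the authors' high-dimensional algorithm):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)

# Estimate P(Z1 > 0, Z2 > 0) for standardized (Z1, Z2) with correlation rho:
# draw Z1 from its truncated distribution, then average P(Z2 > 0 | Z1).
rho = 0.5
M = 20000

p1 = 1 - norm.cdf(0)                        # P(Z1 > 0) = 0.5
u = rng.uniform(size=M)
z1 = norm.ppf(norm.cdf(0) + u * p1)         # inverse-CDF draw of Z1 | Z1 > 0
cond_sd = np.sqrt(1 - rho**2)               # sd of Z2 | Z1
p2_given = 1 - norm.cdf(0, loc=rho * z1, scale=cond_sd)
est = p1 * p2_given.mean()

exact = 0.25 + np.arcsin(rho) / (2 * np.pi)  # closed-form orthant probability
print(est, exact)
```

EIS generalises this by constructing the sequential sampling densities to minimise the variance of the likelihood estimate, which is what keeps it accurate when the latent process has thousands of dimensions.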
Mohammad Arshad Rahman and Angela Vossmeyer
Abstract
This chapter develops a framework for quantile regression in binary longitudinal data settings. A novel Markov chain Monte Carlo (MCMC) method is designed to fit the model and its computational efficiency is demonstrated in a simulation study. The proposed approach is flexible in that it can account for common and individual-specific parameters, as well as multivariate heterogeneity associated with several covariates. The methodology is applied to study female labor force participation and home ownership in the United States. The results offer new insights at the various quantiles, which are of interest to policymakers and researchers alike.
Juan A. Sanchis Llopis, Juan A. Mañez and Andrés Mauricio Gómez-Sánchez
Abstract
Purpose
This paper aims to examine the interrelation between two innovating strategies (product and process) on total factor productivity (TFP) growth and the dynamic linkages between these strategies, for Colombia. The authors first explore whether ex ante more productive firms are those that introduce innovations (the self-selection hypothesis) and if the introduction of innovations boosts TFP growth (the returns-to-innovation hypothesis). Second, the authors study the firm’s joint dynamic decision to implement process and/or product innovations. The authors use Colombian manufacturing data from the Annual Manufacturing and the Technological Development and Innovation Surveys.
Design/methodology/approach
This study uses a four-stage procedure. First, the authors estimate TFP using a modified version of Olley and Pakes (1996) and Levinsohn and Petrin (2003), proposed by De Loecker (2010), that implements an endogenous Markov process in which past firm innovations are endogenised. TFP is then estimated by GMM, following Wooldridge (2009). Second, the authors use multivariate discrete choice models to test the self-selection hypothesis. Third, the authors explore, using multi-value treatment evaluation techniques, the life span of the impact of innovations on productivity growth (the returns-to-innovation hypothesis). Fourth, the authors analyse the joint likelihood of implementing process and product innovations using dynamic panel data bivariate probit models.
Findings
The investigation reveals that the self-selection effect is notably more pronounced in the adoption of process innovations only, as opposed to the adoption of product innovations only or the simultaneous adoption of both process and product innovations. Moreover, our results uncover distinct temporal patterns in the returns to innovation. Specifically, process innovations yield immediate benefits, whereas product innovations alone, and the joint adoption of process and product innovations, yield significant albeit delayed advantages. Finally, the analysis confirms the existence of dynamic interconnections between the adoption of process and product innovations.
Originality/value
The contribution of this work to the literature is manifold. First, the authors thoroughly investigate the relationship between the implementation of process and product innovations and productivity for Colombian manufacturing, explicitly recognising that firms' decisions to adopt product and process innovations are very likely interrelated. Therefore, the authors explore the self-selection and returns-to-innovation hypotheses accounting for the fact that firms might implement process innovations only, product innovations only, or both process and product innovations. In the analysis of the returns to innovation, the fact that firms may choose among a menu of three innovation strategies implies the use of evaluation methods for multi-value treatments. Second, the authors study the dynamic inter-linkages between the decisions to implement process and/or product innovations, which remain understudied, at least for emerging economies. Third, the estimation of TFP is performed using an endogenous Markov process in which past firm innovations are endogenised.
Abstract
Purpose
This study is motivated in part by the fact that the unfolding 2022 bear market, which has reached the −25% drawdown, has not been preceded by an inverted 10Y-3M spread or an inverted near-term forward spread.
Design/methodology/approach
The authors develop a three-factor probit model to predict/explain the deep stock market drawdowns, which the authors define as the drawdowns in excess of 20%.
Findings
The study results show that (1) rising credit risk predicts a deep drawdown about a year in advance and (2) monetary policy easing precedes an imminent drawdown below the 20% threshold.
Originality/value
This study's three-factor probit model shows adaptability beyond the typical recessionary bear market and predicts/explains liquidity-based selloffs, like the 2022 and possibly the 1987 deep drawdowns.