Search results
1–10 of over 38,000 results

Cindy S. H. Wang and Shui Ki Wan
Abstract
This chapter extends the univariate forecasting method proposed by Wang, Luc, and Hsiao (2013) to forecast the multivariate long memory model subject to structural breaks. The approach requires neither estimating the parameters of this multivariate system nor detecting the structural breaks; the only step is to fit a VAR(k) model that approximates the multivariate long memory model subject to structural breaks. The approach therefore reduces the computational burden substantially and avoids estimating the parameters of the multivariate long memory model, which can lead to poor forecasting performance. Moreover, when there are multiple breaks, when breaks occur close to the end of the sample, or when breaks occur at different locations across the series in the system, our VAR approximation solves the issue of spurious breaks in finite samples, even when the exact orders of the multivariate long memory process are unknown. Insights from our theoretical analysis are confirmed by a set of Monte Carlo experiments, which demonstrate that our approach provides a substantial improvement over existing multivariate prediction methods. Finally, an empirical application to multivariate realized volatility illustrates the usefulness of our forecasting procedure.
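The computational appeal of the approach is that fitting a VAR(k) is nothing more exotic than equation-by-equation least squares. As a rough illustration (a generic sketch, not the chapter's actual algorithm; the function names and the plain OLS estimator are assumptions), a minimal fit-and-forecast routine might look like:

```python
import numpy as np

def fit_var(y, k):
    """Fit a VAR(k) by equation-by-equation OLS.

    y : (T, n) array of n time series observed at T points.
    Returns intercept c (n,) and lag coefficient matrices A (k, n, n).
    """
    T, n = y.shape
    # Stack regressors: row for time t is [1, y[t-1], ..., y[t-k]]
    X = np.column_stack([np.ones(T - k)] +
                        [y[k - j - 1:T - j - 1] for j in range(k)])
    Y = y[k:]
    B, *_ = np.linalg.lstsq(X, Y, rcond=None)   # (1 + n*k, n)
    c = B[0]
    # Reshape so A[j][eq, var] multiplies y[t-1-j][var] in equation eq
    A = B[1:].reshape(k, n, n).transpose(0, 2, 1)
    return c, A

def forecast_var(y, c, A, h=1):
    """Iterate h one-step-ahead forecasts from the fitted VAR."""
    hist = list(y)
    for _ in range(h):
        nxt = c + sum(A[j] @ hist[-1 - j] for j in range(len(A)))
        hist.append(nxt)
    return np.array(hist[len(y):])
```

Choosing k large enough for the VAR to absorb the long-memory and break dynamics is the substantive question the chapter addresses; the code above only shows the mechanical step.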
Chandra R. Bhat, Cristiano Varin and Nazneen Ferdous
Abstract
This chapter compares the performance of the maximum simulated likelihood (MSL) approach with the composite marginal likelihood (CML) approach in multivariate ordered-response situations. The ability of the two approaches to recover model parameters in simulated data sets is examined, as are the efficiency of the estimated parameters and the computational cost. Overall, the simulation results demonstrate that the CML approach recovers the parameters very well in a 5–6 dimensional ordered-response choice model context. In addition, the CML approach recovers parameters as well as the MSL approach in the simulation contexts used in this study, while doing so at a substantially reduced computational cost. Further, any loss of efficiency of the CML approach relative to the MSL approach ranges from nonexistent to small. Taken together with its conceptual and implementation simplicity, the CML approach appears to be a promising approach for estimating not only the multivariate ordered-response model considered here, but also other analytically intractable econometric models.
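To make the CML idea concrete: the full likelihood of a multivariate ordered probit requires a high-dimensional normal integral, whereas the pairwise CML only ever evaluates bivariate rectangle probabilities and sums their logs over pairs. The sketch below is illustrative only (hypothetical function names; common cutpoints and a single equicorrelation parameter are simplifying assumptions, not the chapter's specification):

```python
import numpy as np
from scipy.stats import multivariate_normal

def rect_prob(lo, hi, rho):
    """P(lo1 < Z1 < hi1, lo2 < Z2 < hi2) for a standard bivariate
    normal with correlation rho, via inclusion-exclusion."""
    cov = np.array([[1.0, rho], [rho, 1.0]])
    F = lambda x: multivariate_normal(mean=[0, 0], cov=cov).cdf(x)
    return (F([hi[0], hi[1]]) - F([lo[0], hi[1]])
            - F([hi[0], lo[1]]) + F([lo[0], lo[1]]))

def pairwise_cml(y, cutpoints, rho):
    """Negative log composite (pairwise) likelihood for an
    n-dimensional ordered probit with common cutpoints and an
    equicorrelated latent normal. y : (N, n) integer categories."""
    cuts = np.concatenate(([-8.0], cutpoints, [8.0]))  # +/-8 stands in for infinity
    ll = 0.0
    for obs in y:
        n = len(obs)
        for i in range(n):
            for j in range(i + 1, n):
                lo = (cuts[obs[i]], cuts[obs[j]])
                hi = (cuts[obs[i] + 1], cuts[obs[j] + 1])
                ll += np.log(rect_prob(lo, hi, rho) + 1e-300)
    return -ll
```

In practice this objective would be handed to a numerical optimizer over the cutpoints and correlation parameters; each evaluation costs only O(n^2) bivariate CDF calls rather than one n-dimensional integral, which is the computational saving the chapter quantifies.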
Bertrand Candelon, Elena-Ivona Dumitrescu, Christophe Hurlin and Franz C. Palm
Abstract
In this article we propose a multivariate dynamic probit model. Our model can be viewed as a nonlinear VAR model for the latent variables associated with correlated binary time-series data. To estimate it, we implement an exact maximum likelihood approach, hence providing a solution to the problem generally encountered in the formulation of multivariate probit models. Our framework allows us to study the predictive relationships among the binary processes under analysis. Finally, an empirical study of three financial crises is conducted.
Abstract
Purpose
The purpose of this paper is to show that the multivariate t-distribution assumption provides a better description of stock return data than the multivariate normality assumption.
Design/methodology/approach
The EM algorithm is applied to solve the statistical estimation problem almost analytically, and the asymptotic theory is provided for inference.
Findings
The authors find that the multivariate normality assumption is almost always rejected by real stock return data, while the multivariate t-distribution assumption can often be adequate. Conclusions under normality versus under the t-distribution can differ drastically when estimating expected returns and Jensen’s αs, and when testing asset pricing models.
Practical implications
The results provide improved estimates of cost of capital and asset moment parameters that are useful for corporate project evaluation and portfolio management.
Originality/value
The authors propose new procedures that make it easy to use a multivariate t-distribution, which models the data well, as a simple and viable alternative in practice for examining the robustness of many existing results.
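For readers who want to see the estimation machinery, the EM iteration for multivariate t location and scatter with fixed degrees of freedom is standard and compact. The sketch below is the textbook scheme, not necessarily the authors' exact procedure (estimating the degrees of freedom as well would require an additional one-dimensional search, omitted here):

```python
import numpy as np

def fit_mvt_em(X, nu=5.0, iters=100):
    """EM for multivariate t location mu and scatter S with fixed
    degrees of freedom nu. Heavy-tailed points receive small
    weights, which is what robustifies the estimates."""
    N, p = X.shape
    mu = X.mean(axis=0)
    S = np.cov(X, rowvar=False)
    for _ in range(iters):
        diff = X - mu
        # Squared Mahalanobis distances d_i = diff_i' S^{-1} diff_i
        d = np.einsum('ij,jk,ik->i', diff, np.linalg.inv(S), diff)
        w = (nu + p) / (nu + d)                       # E-step weights
        mu = (w[:, None] * X).sum(axis=0) / w.sum()   # M-step: location
        diff = X - mu
        S = (w[:, None] * diff).T @ diff / N          # M-step: scatter
    return mu, S
```

The "almost analytically" claim in the abstract reflects the fact that every step above is a closed-form weighted moment calculation.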
C.H. Wong, J. Nicholas and G.D. Holt
Abstract
Today’s growing number of contractor selection methodologies reflects the construction industry’s increasing awareness of the need to improve its procurement process and performance. This paper investigates contractor classification methods that link clients’ selection aspirations to contractor performance. Multivariate techniques were used to study the intrinsic link between clients’ selection preferences, i.e. project-specific criteria (PSC) and their respective levels of importance assigned (LIA) during tender evaluation, to build contractor classification models on a data set of 68 case studies of UK construction projects. Logistic regression (LR) and multivariate discriminant analysis (MDA) were used. Results revealed that both techniques produced good predictions of contractor performance and indicated that suitability of equipment, past cost and time performance on similar projects, contractor relationship with the local authority, and contractor reputation/image are the most predominant of the 34 PSC in the LR and MDA models. The paper suggests that contractor classification models using multivariate techniques could be developed further.
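As a toy illustration of the LR side of such a classification model (the feature coding and function names below are invented for illustration, not the paper's 34 PSC), a contractor can be scored as a likely good or poor performer with a plain logistic fit:

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, iters=2000):
    """Logistic regression by gradient descent: classify a
    contractor as good (1) or poor (0) performer from numeric
    criterion scores in X."""
    X1 = np.column_stack([np.ones(len(X)), X])  # prepend intercept
    w = np.zeros(X1.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X1 @ w))       # predicted probabilities
        w -= lr * X1.T @ (p - y) / len(y)       # gradient step
    return w

def predict(w, X):
    X1 = np.column_stack([np.ones(len(X)), X])
    return (1.0 / (1.0 + np.exp(-X1 @ w)) > 0.5).astype(int)
```

MDA plays the same classification role with a different decision rule (linear discriminant scores under multivariate normality rather than a logit link), which is why the paper can compare the two head-to-head on the same 68 cases.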
Patricia L. Baratta and Jeffrey R. Spence
Abstract
The multidimensional structure of boredom poses unique measurement challenges related to scale length and statistical modeling. We systematically address these concerns in two studies. In Study 1, we use item response theory to shorten the 29-item Multidimensional State Boredom Scale (MSBS) (Fahlman et al., 2013). In Study 2, we use structural equation modeling to compare two theoretically consistent multidimensional structures of boredom (superordinate and multivariate) with the most commonly used, yet theoretically inconsistent, structure in boredom research (unidimensional parallel model). Our findings provide support for modeling boredom as multidimensional and demonstrate the impact of model selection on effect sizes and significance.
Abstract
Purpose
The purpose of this paper is to solve the problem that qualitative relative factors cannot be employed in traditional multivariate grey models.
Design/methodology/approach
First, a new model is constructed by introducing dummy drivers. Then, the parameter estimation method and the recursive function of the model are discussed. Furthermore, methods for setting dummy drivers and for pre- and post-testing them are proposed. Finally, the proposed model is applied to forecasting the per capita income of rural residents in Henan province, China.
Findings
The proposed model is a reasonable extension of the original one, and its accuracy is higher than that of the former model. In the case study, the forecasting results of the proposed model are compared with those of other grey forecasting models, showing that the proposed model offers not only high accuracy but also clear physical meaning.
Practical implications
The method proposed in the paper can be used in policy effect measurement, marketing forecasting, and similar settings where the predictor variables are influenced by qualitative factors.
Originality/value
The model improves the accuracy of multivariate grey forecasting.
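For orientation, the univariate GM(1,1) below shows the basic grey-model machinery (1-AGO accumulation, background values, least-squares parameter estimation) that multivariate grey models with driver terms, including the dummy drivers proposed here, build on. It is a generic textbook sketch, not the paper's model:

```python
import numpy as np

def gm11(x0, h=1):
    """Univariate GM(1,1) grey model: the baseline that multivariate
    grey models extend with driver terms.
    x0 : 1-D array of positive observations.
    Returns the in-sample fit plus h-step-ahead forecasts."""
    n = len(x0)
    x1 = np.cumsum(x0)                     # 1-AGO accumulated series
    z1 = 0.5 * (x1[1:] + x1[:-1])          # background values
    # Least squares for the grey equation x0_k = -a*z1_k + b
    B = np.column_stack([-z1, np.ones(n - 1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    # Whitening-equation solution on the accumulated scale
    k = np.arange(n + h)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    return np.diff(x1_hat, prepend=0.0)    # back to the original scale
```

The paper's extension amounts to adding extra driver columns, including 0/1 dummy columns for the qualitative factors, to the regressor matrix B before the least-squares step.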
Ivan Jeliazkov, Jennifer Graves and Mark Kutzbach
Abstract
In this paper, we consider the analysis of models for univariate and multivariate ordinal outcomes in the context of the latent variable inferential framework of Albert and Chib (1993). We review several alternative modeling and identification schemes and evaluate how each aids or hampers estimation by Markov chain Monte Carlo simulation methods. For each identification scheme we also discuss the question of model comparison by marginal likelihoods and Bayes factors. In addition, we develop a simulation-based framework for analyzing covariate effects that can provide interpretability of the results despite the nonlinearities in the model and the different identification restrictions that can be implemented. The methods are employed to analyze problems in labor economics (educational attainment), political economy (voter opinions), and health economics (consumers’ reliance on alternative sources of medical information).
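The latent-variable scheme of Albert and Chib (1993) can be sketched in its simplest form: a univariate ordinal probit with known cutpoints and a flat prior on the coefficients. The code below is such a minimal Gibbs sampler; the identification and model-comparison machinery the chapter actually studies is deliberately omitted, and the function name is invented:

```python
import numpy as np
from scipy.stats import truncnorm

def gibbs_ordinal_probit(X, y, cuts, n_iter=500, seed=0):
    """Data augmentation for an ordinal probit with KNOWN cutpoints
    `cuts` and a flat prior on beta. Alternates between the latent
    utilities z and the regression coefficients beta."""
    rng = np.random.default_rng(seed)
    N, p = X.shape
    c = np.concatenate(([-np.inf], cuts, [np.inf]))
    beta = np.zeros(p)
    XtX_inv = np.linalg.inv(X.T @ X)
    L = np.linalg.cholesky(XtX_inv)
    draws = []
    for _ in range(n_iter):
        # z_i ~ N(x_i'beta, 1) truncated to the interval of category y_i
        m = X @ beta
        lo, hi = c[y] - m, c[y + 1] - m
        z = m + truncnorm.rvs(lo, hi, random_state=rng)
        # beta | z ~ N((X'X)^{-1} X'z, (X'X)^{-1}) under the flat prior
        mean = XtX_inv @ X.T @ z
        beta = mean + L @ rng.standard_normal(p)
        draws.append(beta.copy())
    return np.array(draws)
```

Both conditional distributions are standard, which is what makes the augmentation attractive; the chapter's contribution concerns what happens to this scheme under different identification restrictions and in the multivariate case.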
Abstract
Multivariate latent growth modeling (multivariate LGM) provides a flexible data analytic framework for representing and assessing cross-domain (i.e., between-constructs) relationships in intraindividual changes over time, which also allows incorporation of multiple levels of analysis. Using the chapter by Cortina, Pant, and Smith-Darden (this volume) as a point of departure, this chapter discusses important preliminary data analysis and interpretation issues prior to performing multivariate LGM analyses.
Burcu Tunga and Metin Demiralp
Abstract
Purpose
The plain High Dimensional Model Representation (HDMR) method needs Dirac delta type weights to partition the given multivariate data set for modelling an interpolation problem. A Dirac delta type weight imposes a different importance level on each node of this set during the partitioning procedure, which directly affects the performance of HDMR. The purpose of this paper is to develop a new method, combining the fluctuation free integration and HDMR methods, to obtain the optimized weight factors needed for identifying these importance levels in the multivariate data partitioning and modelling procedure.
Design/methodology/approach
A common problem in multivariate interpolation, where the sought function values are given at the nodes of a rectangular prismatic grid, is to determine an analytical structure for the function under consideration. As the multivariance of an interpolation problem increases, standard numerical methods become incomplete and computer-based applications run into memory limitations. To overcome these problems it is better to work with less-variate structures. HDMR methods, which are based on a divide-and-conquer philosophy, can be used for this purpose; this corresponds to multivariate data partitioning in which at most the univariate components of the plain HDMR are taken into consideration. Obtaining these components requires a number of integrals to be evaluated, and the Fluctuation Free Integration method is used to evaluate them. This new form of HDMR, integrated with Fluctuation Free Integration, also allows the Dirac delta type weights in multivariate data partitioning to be discarded, so that the weight factors corresponding to the importance level of each node of the given set can be optimized.
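The plain-HDMR truncation described above, a constant term plus univariate components, is easy to state concretely for a function sampled on a rectangular grid. The sketch below uses uniform weights as a simple stand-in for the optimized weights the paper develops:

```python
import numpy as np

def hdmr_first_order(F):
    """Zeroth- and first-order HDMR components of a function sampled
    on a rectangular grid, under uniform node weights.
    F : n-dimensional array of function values on the grid.
    Returns the constant f0 and a list of 1-D components f_i."""
    n = F.ndim
    f0 = F.mean()                                  # zeroth-order term
    comps = []
    for i in range(n):
        # Average out every axis except i, then remove the constant
        axes = tuple(j for j in range(n) if j != i)
        comps.append(F.mean(axis=axes) - f0)
    return f0, comps
```

For an additive function the first-order truncation f0 + sum_i f_i(x_i) reproduces the data exactly; for general functions it is only an approximation, and the quality of that approximation is what a good choice of node weights improves.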
Findings
The method developed in this study was applied to six numerical examples with different structures, and very encouraging results were obtained. In addition, the new method is compared with methods that use a Dirac delta type weight function, and the results are given in the numerical implementations section.
Originality/value
The authors' new method allows an optimized weight structure to be determined for the given modelling problem, instead of imposing a particular weight function such as the Dirac delta type weight. This gives the HDMR philosophy the flexibility of weight utilization in multivariate data modelling problems.