Search results
1 – 10 of over 90,000
Michiel de Pooter, Francesco Ravazzolo, Rene Segers and Herman K. van Dijk
Several lessons learnt from a Bayesian analysis of basic macroeconomic time-series models are presented for the situation where some model parameters have substantial posterior…
Abstract
Several lessons learnt from a Bayesian analysis of basic macroeconomic time-series models are presented for the situation where some model parameters have substantial posterior probability near the boundary of the parameter region. This feature refers to near-instability within dynamic models, to forecasting with near-random walk models and to clustering of several economic series in a small number of groups within a data panel. Two canonical models are used: a linear regression model with autocorrelation and a simple variance components model. Several well-known time-series models like unit root and error correction models and further state space and panel data models are shown to be simple generalizations of these two canonical models for the purpose of posterior inference. A Bayesian model averaging procedure is presented in order to deal with models with substantial probability both near and at the boundary of the parameter region. Analytical, graphical, and empirical results using U.S. macroeconomic data, in particular on GDP growth, are presented.
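The model-averaging step this abstract describes can be illustrated with a minimal sketch (the model names, log marginal likelihoods, and forecasts below are hypothetical, not taken from the paper): posterior model probabilities are proportional to prior probability times marginal likelihood, and the combined forecast is the probability-weighted average of each model's forecast.

```python
import numpy as np

# Hypothetical log marginal likelihoods for three candidate models
# (say, a stationary AR model, a near-unit-root model, a random walk).
log_ml = np.array([-102.3, -101.8, -103.5])
prior = np.array([1 / 3, 1 / 3, 1 / 3])   # equal prior model probabilities

# Posterior model probabilities; subtract the max for numerical stability
w = prior * np.exp(log_ml - log_ml.max())
post_prob = w / w.sum()

# Hypothetical point forecasts of next-period GDP growth from each model
forecasts = np.array([2.1, 1.8, 1.5])

# Bayesian model averaging: probability-weighted combination
bma_forecast = float(post_prob @ forecasts)
print(post_prob, bma_forecast)
```

The same weighting extends directly to models at the boundary of the parameter region, which simply enter the average as additional candidates.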
Junan Ji, Zhigang Zhao, Shi Zhang and Tianyuan Chen
This paper aims to propose an energetic model parameter calculation method for predicting the materials’ symmetrical static hysteresis loop and asymmetrical minor loop to improve…
Abstract
Purpose
This paper aims to propose an energetic model parameter calculation method for predicting the materials’ symmetrical static hysteresis loop and asymmetrical minor loop to improve the accuracy of electromagnetic analysis of equipment.
Design/methodology/approach
For predicting the symmetrical static hysteresis loop, this paper deduces the functional relationship between magnetic flux density and the energetic model parameters based on the materials’ magnetization mechanism, enabling efficient and accurate prediction of the symmetrical static hysteresis loop under different magnetization levels. For predicting the asymmetrical minor loop, a new algorithm is proposed that updates the energetic model parameters of the asymmetrical minor loop to account for the return-point memory effect.
Findings
A comparison of simulation and experimental results verifies that the proposed parameter calculation method has high accuracy and strong universality.
Originality/value
The proposed parameter calculation method overcomes the existing methods’ reliance on large amounts of experimental data and their inaccuracy. Consequently, the presented work facilitates the application of finite element electromagnetic field analysis coupled with the hysteresis model.
Details
Keywords
Lingling Pei, Qin Li and Zhengxin Wang
The purpose of this paper is to propose a new method based on nonlinear least squares (NLS) for solving the parameters of the nonlinear grey Bernoulli model (NGBM(1,1)) and to verify…
Abstract
Purpose
The purpose of this paper is to propose a new method based on nonlinear least squares (NLS) for solving the parameters of the nonlinear grey Bernoulli model (NGBM(1,1)) and to verify the proposed model using the case of employee demand prediction of high-tech enterprises in China.
Design/methodology/approach
First, minimising the sum of squared fitting errors of the grey differential equation of NGBM(1,1) is taken as the optimisation target, and the parameters of the classic grey model (GM(1,1)) are set as the initial values of the parameter vector. Afterwards, the structural parameters and power exponents are solved using the Gauss-Newton iteration algorithm, which yields the parameters of NGBM(1,1) under given stopping rules. Finally, taking the employee demand of high-tech enterprises in state-level high-tech industrial development zones in China as an example, the validity of the new method is verified.
Findings
The results show that the parameter estimation algorithm based on the NLS method can effectively identify the power exponents of NGBM(1,1) and therefore can favourably adapt to the nonlinear fluctuations of sequences. In addition, the algorithm is superior to the GM(1,1) model, grey Verhulst model, and Quadratic-Exponential smoothing algorithm in terms of the simulation and prediction accuracy.
Research limitations/implications
Under the framework of solving parameters based on NLS, various aspects of NGBM(1,1) remain to be further investigated including background value, initial condition and variable structural modelling methods.
Practical implications
The parameter estimation algorithm based on NLS can effectively identify the power exponent of NGBM(1,1) and therefore it can favourably adapt to the nonlinear fluctuation of sequences.
Originality/value
According to the basic principle of NLS, a new method for solving the parameters of NGBM(1,1) is proposed using the Gauss-Newton iteration algorithm. Moreover, through a modelling case on employee demand in high-tech enterprises in China, the effectiveness and superiority of the new method are verified.
Details
Keywords
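As a rough illustration of the NLS approach described above (synthetic data and a generic SciPy solver, not the authors' implementation), one can minimise the squared residuals of the NGBM(1,1) grey differential equation x0(k) + a*z1(k) = b*z1(k)^n, starting from GM(1,1) least-squares estimates as the abstract suggests:

```python
import numpy as np
from scipy.optimize import least_squares

# Synthetic raw series (illustrative; not the paper's employee-demand data)
x0 = np.array([2.8, 3.1, 3.7, 4.6, 5.9, 7.4])
x1 = np.cumsum(x0)                # accumulated generating sequence
z1 = 0.5 * (x1[1:] + x1[:-1])     # background values

def resid(p):
    """Residuals of the grey differential equation x0(k) + a*z1(k) = b*z1(k)^n."""
    a, b, n = p
    return x0[1:] + a * z1 - b * z1 ** n

# GM(1,1) ordinary least squares (the n = 0 case) gives the initial values
B = np.column_stack([-z1, np.ones_like(z1)])
a0, b0 = np.linalg.lstsq(B, x0[1:], rcond=None)[0]

# Gauss-Newton-type refinement via SciPy's nonlinear least squares
sol = least_squares(resid, x0=[a0, b0, 0.0])
a, b, n = sol.x
print(a, b, n, sol.cost)
```

Because the GM(1,1) estimates already fit the linearised model, the nonlinear refinement can only reduce the residual sum of squares, which mirrors the paper's reported accuracy gains.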
Alivarani Mohapatra, Byamakesh Nayak and Kanungo Barada Mohanty
This paper aims to propose a simple, derivative-free method, the Nelder–Mead optimization algorithm, to estimate the unknown parameters of the photovoltaic (PV) module…
Abstract
Purpose
This paper aims to propose a simple, derivative-free method, the Nelder–Mead optimization algorithm, to estimate the unknown parameters of the photovoltaic (PV) module while taking environmental conditions into account.
Design/methodology/approach
At a particular temperature and irradiation, experimental current-voltage (I-V) and power-voltage (P-V) characteristics are drawn and taken as the reference model. The PV system model with unknown parameters is treated as the adaptive model, whose parameters are adapted so that the simulated characteristics closely match the experimental characteristics. A single-diode (Rsh) model with five unknown parameters is considered here for the parameter estimation.
Findings
The key advantages of this method are that parameters are estimated under actual environmental conditions and that experimental characteristics are used for the estimation, which yields accurate results. Parameters are estimated from both the I-V and P-V curves, as most applications demand extraction of the actual power from the PV module.
Originality/value
The proposed model is compared with three other well-known models from the literature using various statistical error measures. The results show the superiority of the proposed model, with minimum error for both the I-V and P-V characteristics.
Details
Keywords
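The estimation idea in this entry can be sketched as follows (a hedged toy example: synthetic data, an assumed five-parameter single-diode equation, and SciPy's Nelder–Mead rather than the authors' code). A reference I-V curve is generated from known parameters, and the simplex search adapts the model parameters to minimise the mismatch:

```python
import numpy as np
from scipy.optimize import brentq, minimize

VT = 0.0257  # thermal voltage at ~25 C (V); single-cell scale assumed

def cell_current(V, Iph, I0, n, Rs, Rsh):
    """Solve the implicit single-diode equation for the current at voltage V."""
    def f(I):
        arg = np.clip((V + I * Rs) / (n * VT), None, 100.0)  # avoid overflow
        return Iph - I0 * np.expm1(arg) - (V + I * Rs) / Rsh - I
    return brentq(f, -2.0, Iph + 2.0)

def iv_curve(p, volts):
    return np.array([cell_current(V, *p) for V in volts])

# Synthetic "measured" I-V data generated from known parameters
true_p = (5.0, 1e-8, 1.3, 0.02, 50.0)    # Iph, I0, n, Rs, Rsh
volts = np.linspace(0.0, 0.6, 25)
i_meas = iv_curve(true_p, volts)

def rmse(p):
    """Objective: RMSE between simulated and measured currents."""
    if min(p) <= 0:                       # keep parameters physical
        return 1e6
    try:
        return float(np.sqrt(np.mean((iv_curve(p, volts) - i_meas) ** 2)))
    except ValueError:                    # root not bracketed for bad guesses
        return 1e6

p0 = (4.5, 5e-8, 1.5, 0.03, 40.0)         # rough initial guess
res = minimize(rmse, p0, method="Nelder-Mead",
               options={"maxiter": 2000, "xatol": 1e-10, "fatol": 1e-12})
print(res.x, res.fun)
```

Nelder–Mead is attractive here precisely because the objective involves an implicit equation solved numerically, so derivatives of the objective are awkward to obtain.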
Ruilin Yu, Yuxin Zhang, Luyao Wang and Xinyi Du
Time headway (THW) is an essential parameter in traffic safety and is used as a typical control variable by many vehicle control algorithms, especially in safety-critical ADAS and…
Abstract
Purpose
Time headway (THW) is an essential parameter in traffic safety and is used as a typical control variable by many vehicle control algorithms, especially in safety-critical ADAS and automated driving systems. However, owing to the randomness of human drivers, THW cannot be represented accurately, which hinders deeper research.
Design/methodology/approach
In this work, two data sets are used as the experimental data to calculate the goodness of fit of 18 commonly used distribution models of THW and to select the best distribution model. Subsequently, characteristic parameters of traffic flow are extracted from the data set, and the three most important variables are selected using a random forest model. Combining these with the best distribution model’s parameters, this study obtains a distribution model with adaptive parameters, whose performance and applicability are verified.
Originality/value
The results show that the proposed model achieves a 62.7% performance improvement over the distribution model with fixed parameters. Moreover, the parameter function of the distribution model can be regarded as quantifying the degree to which the traffic flow state influences THW.
Details
Keywords
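The distribution-selection step can be sketched with a small, hedged example (synthetic headways and three candidate families rather than the paper's two data sets and 18 distributions): fit each candidate by maximum likelihood and rank them by a goodness-of-fit statistic.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Synthetic time-headway sample in seconds (a real study would use
# naturalistic driving data)
thw = rng.lognormal(mean=0.6, sigma=0.5, size=2000)

candidates = {"lognorm": stats.lognorm,
              "gamma": stats.gamma,
              "weibull_min": stats.weibull_min}

# Fit each candidate by maximum likelihood and score it with the
# Kolmogorov-Smirnov statistic (smaller is better)
scores = {}
for name, dist in candidates.items():
    params = dist.fit(thw, floc=0)        # headways start at zero
    scores[name] = stats.kstest(thw, name, args=params).statistic

best = min(scores, key=scores.get)
print(scores, best)
```

Making the fitted parameters functions of traffic-flow variables, as the paper does, would then replace the fixed `params` with regressions on those variables.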
Saurabh Prabhu, Sez Atamturktur and Scott Cogan
This paper aims to focus on the assessment of the ability of computer models with imperfect functional forms and uncertain input parameters to represent reality.
Abstract
Purpose
This paper aims to focus on the assessment of the ability of computer models with imperfect functional forms and uncertain input parameters to represent reality.
Design/methodology/approach
In this assessment, both the agreement between a model’s predictions and available experiments and the robustness of this agreement to uncertainty have been evaluated. The concept of satisfying boundaries to represent input parameter sets that yield model predictions with acceptable fidelity to observed experiments has been introduced.
Findings
Satisfying boundaries provide several useful indicators for model assessment and, when calculated for varying fidelity thresholds and input parameter uncertainties, reveal the trade-off between robustness to uncertainty in model parameters, the threshold for satisfactory fidelity and the probability of satisfying the given fidelity threshold. Using a controlled case-study example, the authors illustrate important modeling decisions such as the acceptable level of uncertainty, fidelity requirements and resource allocation for additional experiments.
Originality/value
Traditional methods of model assessment are based solely on fidelity to experiments, leading to a single parameter set that is considered fidelity-optimal and that essentially yields the optimal compensation between various sources of errors and uncertainties. Rather than maximizing fidelity, this study advocates basing model assessment on the model’s ability to satisfy a required fidelity (or error tolerance). Evaluating the trade-off between error tolerance, parameter uncertainty and the probability of satisfying this predefined error threshold provides a powerful tool for model assessment and resource allocation.
Details
Keywords
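The satisfying-boundary idea can be sketched with a toy Monte Carlo experiment (the model, parameter ranges and thresholds below are invented for illustration): sample uncertain inputs, measure prediction error against "experimental" data, and compute the probability of satisfying each fidelity threshold.

```python
import numpy as np

rng = np.random.default_rng(0)

def experiment(x):
    """Stand-in for experimental observations of the true system."""
    return 2.0 * x + 0.5 * x ** 2

def model(x, a, b):
    """Imperfect computer model with uncertain input parameters a, b."""
    return a * x + b * x ** 2

x_obs = np.linspace(0.0, 2.0, 10)
y_obs = experiment(x_obs)

# Sample parameter sets from an assumed uncertainty range
a = rng.uniform(1.5, 2.5, 5000)
b = rng.uniform(0.0, 1.0, 5000)
err = np.array([np.max(np.abs(model(x_obs, ai, bi) - y_obs))
                for ai, bi in zip(a, b)])

# Probability of satisfying each fidelity threshold: parameter sets with
# err <= tol lie inside the "satisfying boundary" for that tolerance
probs = {tol: float(np.mean(err <= tol)) for tol in (0.2, 0.5, 1.0)}
for tol, p in probs.items():
    print(f"P(error <= {tol}) = {p:.3f}")
```

Sweeping the tolerance traces exactly the trade-off the abstract describes: looser fidelity requirements admit a larger satisfying boundary and a higher probability of satisfaction.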
Pedro Brinca, Nikolay Iskrev and Francesca Loria
Since its introduction by Chari, Kehoe, and McGrattan (2007), Business Cycle Accounting (BCA) exercises have become widespread. Much attention has been devoted to the results of…
Abstract
Since its introduction by Chari, Kehoe, and McGrattan (2007), Business Cycle Accounting (BCA) exercises have become widespread. Much attention has been devoted to the results of such exercises and to methodological departures from the baseline methodology, but little attention has been paid to identification issues within these classes of models. In this chapter, the authors investigate whether such issues are of concern in the original methodology and in an extension proposed by Šustek (2011) called Monetary Business Cycle Accounting. The authors resort to two types of identification tests in population: one concerns strict identification as theorized by Komunjer and Ng (2011), while the other deals with both strict and weak identification as in Iskrev (2010). Most importantly, the authors explore the extent to which weak identification problems affect the main economic takeaways and find that the identification deficiencies are not relevant for the standard BCA model. Finally, the authors compute some statistics of interest to practitioners of the BCA methodology.
Details
Keywords
Enrique Martínez-García and Mark A. Wynne
We investigate the Bayesian approach to model comparison within a two-country framework with nominal rigidities using the workhorse New Keynesian open-economy model of…
Abstract
We investigate the Bayesian approach to model comparison within a two-country framework with nominal rigidities using the workhorse New Keynesian open-economy model of Martínez-García and Wynne (2010). We discuss the trade-offs that monetary policy – characterized by a Taylor-type rule – faces in an interconnected world, with perfectly flexible exchange rates. We then use posterior model probabilities to evaluate the weight of evidence in support of such a model when estimated against more parsimonious specifications that either abstract from monetary frictions or assume autarky by means of controlled experiments that employ simulated data. We argue that Bayesian model comparison with posterior odds is sensitive to sample size and the choice of observable variables for estimation. We show that posterior model probabilities strongly penalize overfitting, which can lead us to favor a less parameterized model against the true data-generating process when the two become arbitrarily close to each other. We also illustrate that the spillovers from monetary policy across countries have an added confounding effect.
Details
Keywords
Francesco Ravazzolo, Richard Paap, Dick van Dijk and Philip Hans Franses
This chapter develops a return forecasting methodology that allows for instability in the relationship between stock returns and predictor variables, model uncertainty, and…
Abstract
This chapter develops a return forecasting methodology that allows for instability in the relationship between stock returns and predictor variables, model uncertainty, and parameter estimation uncertainty. The predictive regression specification that is put forward allows for occasional structural breaks of random magnitude in the regression parameters, uncertainty about the inclusion of forecasting variables, and uncertainty about parameter values by employing Bayesian model averaging. The implications of these three sources of uncertainty and their relative importance are investigated from an active investment management perspective. It is found that the economic value of incorporating all three sources of uncertainty is considerable. A typical investor would be willing to pay up to several hundred basis points annually to switch from a passive buy-and-hold strategy to an active strategy based on a return forecasting model that allows for model and parameter uncertainty as well as structural breaks in the regression parameters.
Constructing and evaluating behavioral science models is a complex process. Decisions must be made about which variables to include, which variables are related to each other, the…
Abstract
Constructing and evaluating behavioral science models is a complex process. Decisions must be made about which variables to include, which variables are related to each other, the functional forms of the relationships, and so on. The last 10 years have seen a substantial extension of the range of statistical tools available for use in the construction process. The progress in tool development has been accompanied by the publication of handbooks that introduce the methods in general terms (Arminger et al., 1995; Tinsley & Brown, 2000a). Each chapter in these handbooks cites a wide range of books and articles on specific analysis topics.