Search results

1 – 10 of over 3000
Article
Publication date: 1 August 2003

Zakir Hossain and M. Ishaq Bhatti

This paper briefly introduces the concept of model selection, reviews recent developments in the econometric analysis of model selection and addresses some of the crucial…

Abstract

This paper briefly introduces the concept of model selection, reviews recent developments in the econometric analysis of model selection and addresses some of the crucial issues that researchers face in their routine research problems. The paper emphasizes the importance of model selection, particularly information criterion- and penalty function-based model selection procedures, which are useful for economists and finance researchers.
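
For readers unfamiliar with the penalty-function idea, the two most common criteria take the textbook forms below (general definitions, not notation taken from this paper):

    \mathrm{AIC} = -2\log L(\hat\theta) + 2k, \qquad \mathrm{BIC} = -2\log L(\hat\theta) + k\log n,

where L(\hat\theta) is the maximised likelihood, k the number of estimated parameters and n the sample size; the candidate model with the smallest criterion value is selected, so the second term acts as the penalty for model complexity.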

Details

Managerial Finance, vol. 29 no. 7
Type: Research Article
ISSN: 0307-4358

Keywords

Details

Applying Maximum Entropy to Econometric Problems
Type: Book
ISBN: 978-0-76230-187-4

Book part
Publication date: 15 April 2020

Yubo Tao and Jun Yu

This chapter examines the limit properties of information criteria (such as AIC, BIC, and HQIC) for distinguishing between the unit-root (UR) model and the various kinds of…

Abstract

This chapter examines the limit properties of information criteria (such as AIC, BIC, and HQIC) for distinguishing between the unit-root (UR) model and various kinds of explosive models. The explosive models include the local-to-unit-root model from the explosive side, the mildly explosive (ME) model, and the regular explosive model. Initial conditions with different orders of magnitude are considered. Both the OLS estimator and the indirect inference estimator are studied. It is found that BIC and HQIC, but not AIC, consistently select the UR model when data come from the UR model. When data come from the local-to-unit-root model from the explosive side, both BIC and HQIC select the wrong model with probability approaching 1, while AIC has a positive probability of selecting the right model in the limit. When data come from the regular explosive model or from the ME model in the form of 1 + n^α/n with α ∈ (0, 1), all three information criteria consistently select the true model. Indirect inference estimation can increase or decrease the probability for information criteria to select the right model asymptotically, relative to OLS, depending on the information criterion and the true model. Simulation results confirm our asymptotic results in finite samples.
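
A minimal Python sketch of the kind of comparison the chapter studies, assuming Gaussian AR(1) data and ignoring initial-condition subtleties; the simulation design and function names are illustrative, not the authors' code:

    import numpy as np

    def bic(rss, n, k):
        # Gaussian BIC up to a constant that is common to both models
        return n * np.log(rss / n) + k * np.log(n)

    rng = np.random.default_rng(0)
    n = 200
    y = np.cumsum(rng.standard_normal(n))      # data generated by the unit-root (UR) model

    dy, ylag = np.diff(y), y[:-1]
    m = n - 1

    # UR model imposed: y_t = y_{t-1} + e_t (no autoregressive coefficient estimated)
    rss_ur = np.sum(dy ** 2)

    # Unrestricted AR(1): y_t = rho * y_{t-1} + e_t, rho estimated by OLS
    rho = ylag @ y[1:] / (ylag @ ylag)
    rss_ar = np.sum((y[1:] - rho * ylag) ** 2)

    print("BIC, UR imposed  :", bic(rss_ur, m, 0))
    print("BIC, AR(1) fitted:", bic(rss_ar, m, 1))

With data from the UR model, the log-n penalty typically makes BIC favour the restricted model, which illustrates the consistency result described for BIC and HQIC.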

Article
Publication date: 20 September 2021

R. Scott Hacker and Abdulnasser Hatemi-J

The issue of model selection in applied research is of vital importance. Since the true model in such research is not known, which model should be used from among various…

Abstract

Purpose

The issue of model selection in applied research is of vital importance. Since the true model in such research is not known, which model should be used from among various potential ones is an empirical question. There might exist several competitive models. A typical approach to dealing with this is classic hypothesis testing using an arbitrarily chosen significance level based on the underlying assumption that a true null hypothesis exists. In this paper, the authors investigate how successful the traditional hypothesis testing approach is in determining the correct model for different data generating processes using time series data. An alternative approach based on more formal model selection techniques using an information criterion or cross-validation is also investigated.

Design/methodology/approach

Monte Carlo simulation experiments on various generating processes are used to look at the response surfaces resulting from hypothesis testing and response surfaces resulting from model selection based on minimizing an information criterion or the leave-one-out cross-validation prediction error.
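
A hedged Python sketch of the two selection devices being compared, applied to choosing an autoregressive lag order; the data-generating process and names below are illustrative assumptions, not the paper's experimental design:

    import numpy as np

    def lag_matrix(y, p):
        # Regressor matrix of p lags for an AR(p) without intercept, plus the target
        X = np.column_stack([y[p - j - 1:len(y) - j - 1] for j in range(p)])
        return X, y[p:]

    def aic(y, p):
        X, t = lag_matrix(y, p)
        beta, *_ = np.linalg.lstsq(X, t, rcond=None)
        rss = np.sum((t - X @ beta) ** 2)
        return len(t) * np.log(rss / len(t)) + 2 * p

    def loo_cv(y, p):
        # Leave-one-out prediction error via the OLS hat-matrix shortcut
        X, t = lag_matrix(y, p)
        H = X @ np.linalg.inv(X.T @ X) @ X.T
        resid = t - H @ t
        return np.mean((resid / (1 - np.diag(H))) ** 2)

    rng = np.random.default_rng(1)
    y = np.zeros(300)
    for i in range(2, 300):                    # true process is an AR(2)
        y[i] = 0.5 * y[i - 1] - 0.3 * y[i - 2] + rng.standard_normal()

    for p in range(1, 5):
        print(p, round(aic(y, p), 2), round(loo_cv(y, p), 4))

The lag order minimising either column plays the role of the selected model; repeating this over many simulated series gives the kind of response surface the authors examine.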

Findings

The authors find that the minimization of an information criterion can work well for model selection in a time series environment, often performing better than hypothesis-testing strategies. In such an environment, the use of an information criterion can help reduce the number of models for consideration, but the authors recommend the use of other methods also, including hypothesis testing, to determine the appropriateness of a model.

Originality/value

This paper provides an alternative approach for selecting the best potential model among many for time series data. It demonstrates how minimizing an information criterion can be useful for model selection in a time-series environment in comparison to some standard hypothesis testing strategies.

Details

Journal of Economic Studies, vol. 49 no. 6
Type: Research Article
ISSN: 0144-3585

Keywords

Article
Publication date: 20 August 2021

Foued Khlifi

The purpose of this paper is to shed light on the relationship between the Internet Financial Reporting (IFR) levels and corporate characteristics. It is assumed that the…

Abstract

Purpose

The purpose of this paper is to shed light on the relationship between Internet Financial Reporting (IFR) levels and corporate characteristics. It is typically assumed that the relationship between the disclosure level and its determinants is known; the results of empirical studies, however, show this to be a naive assumption. As a result, the author suggests moving beyond the conventional methods of econometric analysis.

Design/methodology/approach

The research methodology consisted of four stages. First, the author selected the "best" model using the Akaike Information Criterion (AIC). Second, the author checked the stability of the relationship between the corporate disclosure level and its determinants. Third, regression analysis was used. Finally, the author proposed a "genetic-fuzzy system" for studying the determinants of corporate disclosure. The yearly firm data were collected from a random sample of 152 Tunisian companies' websites.

Findings

The results show that the variables that should be used to explain the level of IFR are firm size, ownership concentration, firm performance and liquidity. The Chow forecast test shows a significant and large difference between the actual and the predicted values. Consequently, the author suggests using non-parametric methods, particularly a methodology based on fuzzy logic concepts and genetic algorithms. This technique allows the author to discover the true form of the relationship between the disclosure level and its determinants. The findings of the "genetic-fuzzy system" validate all of the study's hypotheses; in particular, the arguments of agency theory, signaling theory and the political cost hypothesis are supported.
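
For reference, a minimal sketch of a Chow forecast (predictive failure) test of the kind reported above, assuming a linear disclosure model estimated by OLS; the function and variable names are illustrative and this is not the author's code:

    import numpy as np
    from scipy import stats

    def chow_forecast_test(X1, y1, X2, y2):
        # Estimate on (X1, y1), then test predictive failure over the (X2, y2) observations
        def rss(X, y):
            b, *_ = np.linalg.lstsq(X, y, rcond=None)
            return np.sum((y - X @ b) ** 2)
        k, n1, n2 = X1.shape[1], len(y1), len(y2)
        rss1 = rss(X1, y1)
        rss_pooled = rss(np.vstack([X1, X2]), np.concatenate([y1, y2]))
        f = ((rss_pooled - rss1) / n2) / (rss1 / (n1 - k))
        return f, stats.f.sf(f, n2, n1 - k)    # statistic and p-value

A large statistic (small p-value) signals a sizeable gap between actual and predicted values, the instability the abstract describes.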

Originality/value

The originality of the paper lies in providing a new research methodology based on several statistical tools for dealing with an important research topic in accounting and finance, i.e. the determinants of IFR. The results of this study can be considered as a starting point to develop a unified methodology.

Details

EuroMed Journal of Business, vol. 17 no. 4
Type: Research Article
ISSN: 1450-2194

Keywords

Article
Publication date: 1 August 1998

Hiroaki Seto

Savage concentrated on building a small world that is not probabilistic but definite, and in which the sure-thing principle works. He reached Kullback-Leibler's information…

Abstract

Savage concentrated on building a small world that is not probabilistic but definite, and in which the sure-thing principle works. He reached Kullback-Leibler's information through Bayes' theorem, by which he intended to improve personal probability as the a posteriori probability. However, he stopped his thinking there. Akaike obtained the Akaike Information Criterion (AIC) by starting from the K-L information. AIC enables us to evaluate which model is closest to the truth, which we cannot recognise directly. If we call the chain from the sure-thing principle and personal probability through Bayes' theorem to AIC the logical structure of information, the author argues that the same structure appears in the Japanese production and distribution system.
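
For reference, the two textbook objects linked in this chain are (standard notation, not the paper's own):

    p(\theta \mid x) = \frac{p(x \mid \theta)\, p(\theta)}{\int p(x \mid \theta')\, p(\theta')\, d\theta'},
    \qquad
    D_{\mathrm{KL}}(f \,\|\, g_\theta) = \int f(x)\, \log \frac{f(x)}{g_\theta(x)}\, dx,

i.e. Bayes' theorem updates the personal (prior) probability into a posterior, while the K-L information measures the discrepancy between the unknown true density f and a candidate model g_\theta; AIC can be read as an estimate of expected relative K-L information, which is why minimising it selects the model closest to the truth.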

Details

Logistics Information Management, vol. 11 no. 4
Type: Research Article
ISSN: 0957-6053

Keywords

Book part
Publication date: 15 January 2010

Matthieu de Lapparent

This article addresses simultaneously two important features in random utility maximisation (RUM) choice modelling: choice set generation and unobserved taste heterogeneity. It is…

Abstract

This article addresses simultaneously two important features in random utility maximisation (RUM) choice modelling: choice set generation and unobserved taste heterogeneity. It develops and compares definitions and properties of econometric specifications based on mixed logit (MXL) and latent class logit (LCL) RUM models in the additional presence of prior compensatory screening decision rules. The latter allow for continuous latent bounds that determine whether choice alternatives are considered for decision making. Each specification is also evaluated and tested against the others in an application to home-to-work mode choice in the Paris region of France using 2002 data.
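
As background, the two logit-kernel families being compared take the standard forms below (textbook notation, not the chapter's exact specification):

    P_{ni}^{\mathrm{MXL}} = \int \frac{e^{V_{ni}(\beta)}}{\sum_{j \in C_n} e^{V_{nj}(\beta)}}\, f(\beta)\, d\beta,
    \qquad
    P_{ni}^{\mathrm{LCL}} = \sum_{c=1}^{C} \pi_c\, \frac{e^{V_{nic}}}{\sum_{j \in C_n} e^{V_{njc}}},

where f(\beta) is a continuous mixing density over tastes, \pi_c are latent class membership probabilities, and C_n is the choice set of individual n, here generated by the compensatory screening rules rather than taken as given.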

Details

Choice Modelling: The State-of-the-art and The State-of-practice
Type: Book
ISBN: 978-1-84950-773-8

Article
Publication date: 11 February 2019

Mohsen Ahmadi and Rahim Taghizadeh

The purpose of this paper is to focus on modeling economic growth with indicators of the knowledge-based economy (KBE) introduced by the World Bank, for a case study of Iran during…

Abstract

Purpose

The purpose of this paper is to focus on modeling economic growth with indicators of the knowledge-based economy (KBE) introduced by the World Bank, for a case study of Iran during 1993-2013.

Design/methodology/approach

First, to group and reduce the number of variables, the Tukey method and principal component analysis are used. For modeling, 67 per cent of the data are used to train the two approaches, ARDL bounds testing and gene expression programming (GEP), and the remaining 33 per cent to test the models. The resulting models are then compared using a fitness function and the Akaike information criterion (AIC).

Findings

The GEP model, with fitness of 945.7461 on the training data and 954.8403 on the testing data (out of 1,000), outperforms the ARDL bounds testing model, with fitness of 335.5479 out of 1,000. In addition, according to the model comparison tool (AIC), the GEP model carries a substantially larger weight than the ARDL bounds model. The GEP model is therefore recommended for future use in academia.
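
A minimal sketch of the AIC-based "weight" comparison mentioned above, using the common Akaike-weight convention; the example AIC values are hypothetical, not figures from the paper:

    import numpy as np

    def akaike_weights(aics):
        # w_i = exp(-delta_i / 2) / sum_j exp(-delta_j / 2), with delta_i = AIC_i - min(AIC)
        delta = np.asarray(aics, dtype=float) - np.min(aics)
        w = np.exp(-0.5 * delta)
        return w / w.sum()

    print(akaike_weights([812.4, 847.9]))      # hypothetical AICs for the GEP and ARDL models

The model with the larger weight carries more relative evidence; a weight near 1 corresponds to the "extremely larger weight" wording in the findings.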

Practical implications

Knowledge and information are, in economists' view, among the most basic sources of wealth. Using KBE indicators is therefore essential in modeling economic growth, given the continual progress in knowledge processes and their related theories. It is also extremely important to determine an appropriate model for the KBE indicators, which play a highly important role in allocating the country's economic resources in an optimal manner.

Originality/value

This paper introduces a novel expression for economic growth using KBE indicators. All the data and indicators are extracted from the World Bank for the period 1993 to 2013.

Details

Journal of Modelling in Management, vol. 14 no. 1
Type: Research Article
ISSN: 1746-5664

Keywords

Article
Publication date: 1 April 2006

Zakir Hossain, Quazi Abdus Samad and Zulficar Ali

The purpose of this paper is to generate three types of forecasts, namely, historical, ex-post and ex-ante, using the world-famous Box-Jenkins time series models for motor, mash…

Abstract

Purpose

The purpose of this paper is to generate three types of forecasts, namely, historical, ex-post and ex-ante, using the world-famous Box-Jenkins time series models for motor, mash and mung prices in Bangladesh.

Design/methodology/approach

The models on the basis of which these forecasts have been computed were selected using six criteria, namely Akaike's Information Criterion (AIC), Schwarz's Bayesian Information Criterion (BIC), R2, Theil's adjusted R̄2, SE(σ) and Mean Absolute Percent Errors (MAPEs). In order to examine the forecasting performance of the selected models, three types of forecast errors were estimated, i.e. root mean square percent errors (RMSPEs), mean percent forecast errors (MPFEs) and Theil's inequality coefficients (TICs).
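
A minimal Python sketch of the three forecast-error measures, under one common set of definitions (conventions differ slightly across textbooks, so treat these as illustrative rather than the authors' exact formulas):

    import numpy as np

    def forecast_accuracy(actual, forecast):
        a, f = np.asarray(actual, float), np.asarray(forecast, float)
        pe = (a - f) / a                                   # percent errors
        rmspe = 100 * np.sqrt(np.mean(pe ** 2))            # root mean square percent error
        mpfe = 100 * np.mean(pe)                           # mean percent forecast error
        tic = np.sqrt(np.mean((f - a) ** 2)) / (
            np.sqrt(np.mean(f ** 2)) + np.sqrt(np.mean(a ** 2)))   # Theil's inequality coefficient
        return rmspe, mpfe, tic

Small RMSPE and MPFE values and a TIC close to zero indicate good forecasting performance, which is how the paper's "quite satisfactory" finding would be read.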

Findings

The estimates suggest that in most cases the forecasting performances of the models in question are quite satisfactory.

Originality/value

The models developed in this paper can be used for policy purposes as far as price forecasts of the commodities are concerned.

Details

International Journal of Social Economics, vol. 33 no. 4
Type: Research Article
ISSN: 0306-8293

Keywords

Article
Publication date: 17 April 2001

Ken Nishina

This paper considers the recent manufacturing environment in which large amounts of data can be obtained on-line in real time. In this environment an on-line…

Abstract

This paper considers the recent manufacturing environment in which large amounts of data can be obtained on-line in real time. For this environment, an on-line, real-time Statistical Process Control (SPC) scheme is proposed, equipped with detection of a process change, change-point estimation and recognition of the change pattern. The proposed SPC scheme is composed of a CUSUM chart, filtering methods and the Akaike Information Criterion (AIC). We examine the performance of this scheme by Monte Carlo simulation and show its usefulness.
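
A minimal sketch of the two ingredients named above, a one-sided CUSUM statistic for detecting an upward mean shift and an AIC search over candidate change points, assuming independent Gaussian observations; thresholds and parameters are illustrative, not the proposed scheme itself:

    import numpy as np

    def cusum(x, target, k=0.5):
        # One-sided upper CUSUM: S_t = max(0, S_{t-1} + x_t - target - k)
        s, path = 0.0, []
        for xt in x:
            s = max(0.0, s + xt - target - k)
            path.append(s)
        return np.array(path)

    def aic_change_point(x):
        # Choose the mean-shift change point that minimises a Gaussian AIC
        n, best_aic, best_tau = len(x), np.inf, None
        for tau in range(2, n - 2):
            rss = (np.sum((x[:tau] - x[:tau].mean()) ** 2)
                   + np.sum((x[tau:] - x[tau:].mean()) ** 2))
            aic = n * np.log(rss / n) + 2 * 3          # two means and a common variance
            if aic < best_aic:
                best_aic, best_tau = aic, tau
        return best_tau

    rng = np.random.default_rng(2)
    x = np.concatenate([rng.normal(0, 1, 60), rng.normal(1.5, 1, 40)])
    print("CUSUM alarm at index:", int(np.argmax(cusum(x, target=0.0) > 5.0)))
    print("AIC change-point estimate:", aic_change_point(x))

In the proposed scheme the CUSUM chart signals that a change has occurred, after which model comparison via AIC is used to locate and characterise it; this sketch only mirrors that division of labour.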

Details

Asian Journal on Quality, vol. 2 no. 1
Type: Research Article
ISSN: 1598-2688

Keywords
