Search results 1–10 of over 71,000
A critical concern for corporations test marketing new products involves test market city evaluation. In order for test marketing to be successful, corporations must identify cities that offer a good fit with the firm’s overall product strategy. Unfortunately, little has been written to aid corporations in making complex test city selection decisions. Presents a model that combines the concepts of marketing, the management science technique of goal programming, and microcomputer technology to provide managers with a more effective and efficient method for evaluating test cities and making selection decisions. Extends the existing literature on test market evaluation by applying a computer optimization model to test market evaluation in a way that has not been done before.
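The goal-programming idea behind such a test-city model can be shown in miniature. The sketch below is not the paper's model: all city names, attributes, goals, and priority weights are invented, and the deviation variables of a full goal program collapse here to weighted absolute deviations from the firm's target profile.

```python
import numpy as np

# Hypothetical candidate test cities (rows) scored on three criteria
# (columns): median income ($k), population (100k), retail density.
# All names and numbers are invented for illustration.
cities = ["Peoria", "Des Moines", "Spokane"]
attrs = np.array([
    [58.0, 1.9, 0.72],
    [61.0, 2.1, 0.65],
    [55.0, 2.2, 0.80],
])

# The firm's target profile (the "goals") and the priority weights on
# deviations from each goal -- also invented.
goals = np.array([60.0, 2.0, 0.75])
weights = np.array([1.0, 2.0, 1.5])

# Goal-programming logic in miniature: minimize the weighted sum of
# deviations from the goals; the best-fitting test city is the one
# whose profile deviates least from the target profile.
deviation = (weights * np.abs(attrs - goals)).sum(axis=1)
best = cities[int(np.argmin(deviation))]
print(best)  # Des Moines
```

A full goal program would add explicit over/under-achievement variables and priority levels, but the ranking logic is the same.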
We extend Vuong’s (1989) model-selection statistic to allow for complex survey samples. As a further extension, we use an M-estimation setting so that the tests apply to general estimation problems – such as linear and nonlinear least squares, Poisson regression and fractional response models, to name just a few – and not only to maximum likelihood settings. With stratified sampling, we show how the difference in objective functions should be weighted in order to obtain a suitable test statistic. Interestingly, the weights are needed in computing the model-selection statistic even in cases where stratification is appropriately exogenous, in which case the usual unweighted estimators for the parameters are consistent. With cluster samples and panel data, we show how to combine the weighted objective function with a cluster-robust variance estimator in order to expand the scope of the model-selection tests. A small simulation study shows that the weighted test is promising.
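As a rough sketch of the kind of statistic involved (not the paper's exact construction), the classical Vuong comparison is built from per-observation log-likelihood differences, and under a survey design those differences enter with weights. All inputs below are simulated placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Simulated per-observation log-likelihood contributions from two fitted
# non-nested models (placeholders for real estimation output).
ll_model1 = rng.normal(-1.40, 0.30, n)
ll_model2 = rng.normal(-1.45, 0.30, n)

# Hypothetical survey weights, e.g. inverse selection probabilities from
# a stratified design; the paper's point is that these must enter the
# statistic even when unweighted parameter estimates are consistent.
w = rng.uniform(0.5, 2.0, n)

m = ll_model1 - ll_model2                      # pointwise log-likelihood ratio
m_bar = np.average(m, weights=w)               # weighted mean difference
s2 = np.average((m - m_bar) ** 2, weights=w)   # weighted variance
z = np.sqrt(n) * m_bar / np.sqrt(s2)           # Vuong-type z statistic

# A large positive z favors model 1, a large negative z favors model 2,
# and z near zero means the models are statistically indistinguishable.
print(round(z, 2))
```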
This paper briefly introduces the concept of model selection, reviews recent developments in the econometric analysis of model selection, and addresses some of the crucial issues researchers face in their routine research problems. The paper emphasizes the importance of model selection, particularly the information-criterion and penalty-function based selection procedures that are useful for economics and finance researchers.
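The penalty-based criteria such surveys emphasize are easy to illustrate. A minimal sketch with invented log-likelihoods, showing how BIC's log(n) penalty can overturn AIC's ranking:

```python
import numpy as np

# Two hypothetical fitted models for the same n = 200 observations:
# (maximized log-likelihood, number of free parameters). Values invented.
n = 200
models = {"AR(1)": (-310.4, 2), "AR(2)": (-308.9, 3)}

def aic(loglik, k):
    return 2 * k - 2 * loglik          # Akaike information criterion

def bic(loglik, k, n):
    return k * np.log(n) - 2 * loglik  # Schwarz/Bayesian information criterion

scores = {name: (aic(ll, k), bic(ll, k, n)) for name, (ll, k) in models.items()}

# Lower is better. BIC's log(n) penalty punishes the extra AR(2) parameter
# more heavily than AIC's flat penalty of 2 per parameter, so the two
# criteria can disagree, as they do here.
best_aic = min(scores, key=lambda m: scores[m][0])
best_bic = min(scores, key=lambda m: scores[m][1])
print(best_aic, best_bic)  # AR(2) by AIC, AR(1) by BIC
```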
We put forward the idea that, for model selection, intrinsic priors are becoming the center of a dominant cluster of methodologies for objective Bayesian model selection.
The intrinsic method and its applications have been developed over the last two decades and have stimulated closely related methods. The intrinsic methodology can be thought of as the long-sought approach to objective Bayesian model selection and hypothesis testing.
In this paper we review the foundations of the intrinsic priors, their general properties, and some of their applications.
In this chapter we discuss model selection and predictive accuracy tests in the context of parameter and model uncertainty under recursive and rolling estimation schemes. We begin by summarizing some recent theoretical findings, with particular emphasis on the construction of valid bootstrap procedures for calculating the impact of parameter estimation error. We then discuss the Corradi and Swanson (2002) (CS) test of (non)linear out-of-sample Granger causality. Thereafter, we carry out a series of Monte Carlo experiments examining the properties of the CS and a variety of other related predictive accuracy and model selection type tests. Finally, we present the results of an empirical investigation of the marginal predictive content of money for income, in the spirit of Stock and Watson (1989), Swanson (1998) and Amato and Swanson (2001).
We review the experimental evidence on risk aversion in controlled laboratory settings. We review the strengths and weaknesses of alternative elicitation procedures, the strengths and weaknesses of alternative estimation procedures, and finally the effect of controlling for risk attitudes on inferences in experiments.
Panel data-based demand forecasting models have been widely adopted in various industrial settings over the past few decades. Despite the method being highly versatile and intuitive, the literature lacks a comprehensive review examining the strengths, weaknesses, and industrial applications of panel data-based demand forecasting models. The purpose of this paper is to fill this gap by reviewing and exploring the features of various mainstream panel data-based demand forecasting models. A novel process, in the form of a flowchart, is developed to help practitioners select the right panel data models for real-world industrial applications. Future research directions are proposed and discussed.
This is a review paper: a systematically searched and carefully selected set of panel data-based forecasting models is examined analytically, and their features are explored and revealed.
This paper is the first to review analytical panel data models specifically for demand forecasting applications. A novel model selection process is developed to help decision makers select the right panel data models for their specific demand forecasting tasks. The strengths, weaknesses, and industrial applications of different panel data-based demand forecasting models are identified, and a future research agenda is proposed.
This review covers the most commonly used and important panel data-based models for demand forecasting. However, hybrid models, which combine panel data-based models with other models, are not covered.
The reviewed panel data-based demand forecasting models are applicable in the real world. The proposed model selection flowchart is implementable in practice and it helps practitioners to select the right panel data-based models for the respective industrial applications.
As the first review of analytical panel data models specifically for demand forecasting applications, this paper is original.
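One mainstream panel model such reviews cover, the fixed-effects (within) estimator, can be sketched for a demand-forecasting task as follows. Stores, prices, and all parameters are simulated; this illustrates the technique, not the paper's flowchart.

```python
import numpy as np

rng = np.random.default_rng(1)
n_stores, n_periods = 20, 12

# Simulated demand panel (all numbers invented): store-specific intercepts
# capture unobserved heterogeneity; price carries a common slope of -3.
alpha = rng.normal(100.0, 10.0, n_stores)                  # store fixed effects
price = rng.uniform(8.0, 12.0, (n_stores, n_periods))
demand = (alpha[:, None] - 3.0 * price
          + rng.normal(0.0, 1.0, (n_stores, n_periods)))

# Within (fixed-effects) estimator: demean each store's series to sweep
# out the fixed effects, then pool and run OLS on the demeaned data.
y = demand - demand.mean(axis=1, keepdims=True)
x = price - price.mean(axis=1, keepdims=True)
beta_hat = (x * y).sum() / (x * x).sum()

# One-step-ahead forecast for store 0 at a hypothetical next-period price
# of 10.5, recovering its fixed effect from its own sample means.
alpha0_hat = demand[0].mean() - beta_hat * price[0].mean()
forecast = alpha0_hat + beta_hat * 10.5
print(round(beta_hat, 2))
```

The same demean-then-pool pattern underlies many of the panel specifications a selection flowchart would choose among.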
This chapter analyzes the empirical relationship between price-setting/consumption behavior and the sources of persistence in inflation and output. First, a small-scale New-Keynesian model (NKM) is examined using method of moments and maximum likelihood estimators with US data from 1960 to 2007. Then a formal test is used to compare the fit of two competing specifications of the New-Keynesian Phillips Curve (NKPC) and the IS equation, that is, backward- and forward-looking behavior. Accordingly, the inclusion of a lagged term in the NKPC and the IS equation improves the fit of the model while offsetting the influence of inherited and extrinsic persistence; it is shown that intrinsic persistence plays a major role in approximating inflation and output dynamics for the Great Inflation period. However, the null hypothesis cannot be rejected at the 5% level for the Great Moderation period, that is, the NKM with purely forward-looking behavior and its hybrid variant are equivalent. Monte Carlo experiments investigate the validity of the chosen moment conditions and the finite sample properties of the chosen estimation methods. Finally, the empirical performance of the formal test is discussed along the lines of the Akaike and Bayesian information criteria.
The purpose of the chapter is to test the hypothesis that food safety (chemical) standards act as barriers to international seafood imports. We use zero-accounting gravity models to test this hypothesis. The chemical standards on which we focus include the chloramphenicol required performance limit, oxytetracycline maximum residue limit, fluoroquinolones maximum residue limit, and dichlorodiphenyltrichloroethane (DDT) pesticide residue limit. The study focuses on the three most important seafood markets: the European Union’s 15 members, Japan, and North America. Our empirical results confirm the hypothesis and are robust to OLS as well as alternative zero-accounting gravity models such as the Heckman estimation and the Poisson family regressions. For choosing the best model specification to account for zero trade and heteroskedasticity, formal statistical tests are inconclusive; however, the Heckman sample selection and zero-inflated negative binomial (ZINB) models provide the most reliable parameter estimates based on statistical tests, the magnitude of coefficients, economic implications, and findings in the literature. Our findings suggest that the continual tightening of seafood safety standards has had a negative impact on exporting countries. Increasing the stringency of regulations by reducing analytical limits or maximum residue limits on seafood in developed countries has negative impacts on their bilateral seafood imports. The chapter furthers the literature on food safety standards in international trade. We show competing gravity model specifications and provide additional evidence that no one gravity model is superior.
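Of the gravity specifications mentioned, the Poisson family is the simplest to sketch, because it accommodates zero trade flows directly. The example below fits a Poisson pseudo-maximum-likelihood (PPML) gravity regression by Newton/IRLS iterations on synthetic data; every covariate, coefficient, and the residue-limit dummy is invented, and the chapter's Heckman and ZINB variants are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 400

# Synthetic bilateral trade data (everything invented): standardized log
# GDPs, log distance, and a dummy for a stringent residue-limit standard.
X = np.column_stack([
    np.ones(n),
    rng.normal(0.0, 1.0, n),   # log exporter GDP (standardized)
    rng.normal(0.0, 1.0, n),   # log importer GDP (standardized)
    rng.normal(0.0, 0.5, n),   # log distance (standardized)
    rng.integers(0, 2, n),     # standard-in-force dummy
])
beta_true = np.array([0.5, 1.0, 0.9, -1.0, -0.5])
trade = rng.poisson(np.exp(X @ beta_true))   # zero flows arise naturally

# PPML via Newton/IRLS: unlike log-linear OLS, zero trade flows need no
# ad hoc transformation before estimation.
beta = np.zeros(X.shape[1])
beta[0] = np.log(trade.mean())               # sane starting intercept
for _ in range(100):
    mu = np.exp(np.clip(X @ beta, -30.0, 30.0))
    step = np.linalg.solve(X.T @ (mu[:, None] * X), X.T @ (trade - mu))
    beta += step
    if np.max(np.abs(step)) < 1e-10:
        break
print(beta.round(2))
```

The negative coefficient on the standard dummy is the gravity-model analogue of the chapter's finding that tighter residue limits depress bilateral imports.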
In today's societies, the work environment and customers' expectations change daily. Consequently, it is crucial for companies to find ways to adapt to new requirements. For this purpose, reengineering projects have been introduced and have evolved in different companies with different responsibilities over the past decades. However, the risk associated with these projects is inevitable and is a huge obstacle to their implementation. This study, in line with previous studies, contributes to this context by proposing a new methodology for selecting suitable processes and adopted best practices as candidates for business process reengineering (BPR). The proposed methodology aims to achieve lower risk and a higher probability of success for BPR projects.
This objective is achieved by integrating the concept of portfolio selection problems (PSP) into organizational decision making concerning BPR projects. A model for selecting the most appropriate reengineering scenarios, each a combination of processes and best practices, is adopted and proposed. By constraining the risk associated with a BPR project while increasing its return, the model identifies the most promising portfolio of scenarios for a reengineering project. The proposed model is tested step by step through a case study in order to validate its outcome and justify its practicality.
In this paper, a new methodology is proposed, containing a model that serves as a managerial tool for conducting more successful reengineering projects. The applicability of the methodology is tested in one of the largest metallurgical laboratory and research centers in Iran. After screening all of the case study's processes, four strategic processes were selected and several best practices customized. In total, 15 different scenarios were explored for the reengineering project, four of which were identified by the model as having the highest probability of success in the BPR project.
This methodology suggests a novel way to benefit from PSP for process selection problems by putting additional controls on the implementation risk of a reengineering project. While companies feel the urge to undertake reengineering projects, their high risk remains a huge obstacle to carrying them out. This study, by proposing a new method, aims to address this issue and to demonstrate the practicality of integrating a PSP model into organizational contexts.
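The portfolio-selection logic can be sketched as a small knapsack-style search: pick the subset of reengineering scenarios that maximizes expected return subject to a risk budget. Scenario names, returns, risks, and the budget below are all invented, and the paper's actual model is richer than this brute-force toy.

```python
from itertools import combinations

# Hypothetical reengineering scenarios: (name, expected return, risk).
# All numbers are invented for illustration.
scenarios = [
    ("S1", 8.0, 3.0), ("S2", 6.5, 2.0), ("S3", 9.0, 5.0),
    ("S4", 4.0, 1.0), ("S5", 7.0, 4.0), ("S6", 5.5, 2.5),
]
risk_budget = 9.0   # cap on the portfolio's total implementation risk

# PSP analogue of the paper's idea: maximize total return over subsets
# whose combined risk stays within budget (brute force is fine at this
# scale; larger instances would call for integer programming).
best_set, best_ret = (), 0.0
for r in range(1, len(scenarios) + 1):
    for combo in combinations(scenarios, r):
        risk = sum(s[2] for s in combo)
        ret = sum(s[1] for s in combo)
        if risk <= risk_budget and ret > best_ret:
            best_set, best_ret = tuple(s[0] for s in combo), ret
print(best_set, best_ret)  # ('S1', 'S2', 'S4', 'S6') 24.0
```

Tightening the risk budget shrinks the chosen portfolio, which is exactly the risk-control lever the methodology adds to scenario selection.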