Search results

1 – 10 of over 2000
Article
Publication date: 14 August 2017

Joonwook Park, Priyali Rajagopal, William Dillon, Seoil Chaiy and Wayne DeSarbo

Abstract

Purpose

Joint space multidimensional scaling (MDS) maps are often utilized for positioning analyses and are estimated with survey data of consumer preferences, choices, considerations, intentions, etc., so as to provide a parsimonious spatial depiction of the competitive landscape. However, little attention has been given to the possibility that consumers may display heterogeneity in their information usage (Bettman et al., 1998) and the possible impact this may have on the corresponding estimated joint space maps. This paper aims to address this important issue and proposes a new Bayesian multidimensional unfolding model for the analysis of two- or three-way dominance (e.g. preference) data. The authors' new MDS model explicitly accommodates dimension selection and preference heterogeneity simultaneously in a unified framework.

Design/methodology/approach

This manuscript introduces a new Bayesian hierarchical spatial MDS model with an accompanying Markov chain Monte Carlo (MCMC) algorithm for estimation that explicitly places constraints on a set of scale parameters in such a way as to model a consumer using or not using each latent dimension in forming his/her preferences, while at the same time permitting consumers to differentially weight each utilized latent dimension. In this manner, both preference heterogeneity and dimensionality selection heterogeneity are modeled simultaneously.
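To make the dimension-selection idea concrete, here is a minimal numpy sketch of the likelihood structure described above: binary per-consumer usage indicators switch latent dimensions on or off, while positive weights differentially scale the dimensions that are used. This is an illustrative toy under assumed names and sizes, not the authors' hierarchical MCMC estimator.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes: I consumers, J brands, R latent dimensions (all hypothetical).
I, J, R = 5, 4, 2
brand_locs = rng.normal(size=(J, R))   # brand coordinates in the joint space
ideal_pts = rng.normal(size=(I, R))    # consumer ideal points

# Dimensionality selection heterogeneity: z[i, r] = 1 if consumer i uses
# dimension r at all; w[i, r] > 0 differentially weights the used dimensions.
z = rng.integers(0, 2, size=(I, R))
w = rng.gamma(2.0, 1.0, size=(I, R))

def dominance_scores(ideal_pts, brand_locs, z, w):
    """Negative weighted squared distance; dimensions with z = 0 drop out."""
    diff = ideal_pts[:, None, :] - brand_locs[None, :, :]      # (I, J, R)
    return -np.sum(z[:, None, :] * w[:, None, :] * diff**2, axis=2)

print(dominance_scores(ideal_pts, brand_locs, z, w).round(2))  # (I, J)
```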

Findings

The superiority of this model over existing spatial models is demonstrated both in the case of simulated data, where the structure of the data is known in advance, and in an empirical application/illustration relating to the positioning of digital cameras. In the empirical application/illustration, the policy implications of accounting for dimensionality selection heterogeneity are derived from the Bayesian spatial analyses conducted. The results demonstrate that a model that incorporates dimensionality selection heterogeneity outperforms models that cannot recognize that consumers may be selective in the product information they choose to process. Such results also show that a marketing manager may encounter biased parameter estimates and distorted market structures if he/she ignores such dimensionality selection heterogeneity.

Research limitations/implications

The proposed Bayesian spatial model provides information regarding how individual consumers utilize each dimension, and relating this usage to behavioral variables can help marketers understand the underlying reasons for selective dimensional usage. Further, the proposed approach helps a marketing manager to identify the major dimension(s) that could maximize the effect of a change in brand positioning, and thus to identify potential opportunities/threats that existing MDS methods cannot provide.

Originality/value

To date, no existing spatial model utilized for brand positioning can accommodate the various forms of heterogeneity exhibited by real consumers mentioned above. The result can be an inaccurate and biased portrayal of competitive market structure whose strategy implications may be wrong and suboptimal. Given the role of such spatial models in the classical segmentation-targeting-positioning paradigm that forms the basis of marketing strategy, the value of such research can be dramatic in many marketing applications, as illustrated in the manuscript via analyses of both synthetic and actual data.

Details

Journal of Modelling in Management, vol. 12 no. 3
Type: Research Article
ISSN: 1746-5664

Book part
Publication date: 19 November 2014

Elías Moreno and Luís Raúl Pericchi

Abstract

We put forward the idea that, for model selection, intrinsic priors are becoming the center of a dominant cluster of methodologies for objective Bayesian model selection.

The intrinsic method and its applications have been developed over the last two decades and have stimulated closely related methods. The intrinsic methodology can be thought of as the long-sought approach to objective Bayesian model selection and hypothesis testing.

In this paper we review the foundations of the intrinsic priors, their general properties, and some of their applications.

Details

Bayesian Model Comparison
Type: Book
ISBN: 978-1-78441-185-5

Book part
Publication date: 19 November 2014

Enrique Martínez-García and Mark A. Wynne

Abstract

We investigate the Bayesian approach to model comparison within a two-country framework with nominal rigidities using the workhorse New Keynesian open-economy model of Martínez-García and Wynne (2010). We discuss the trade-offs that monetary policy – characterized by a Taylor-type rule – faces in an interconnected world with perfectly flexible exchange rates. We then use posterior model probabilities to evaluate the weight of evidence in support of such a model when estimated against more parsimonious specifications that either abstract from monetary frictions or assume autarky, by means of controlled experiments that employ simulated data. We argue that Bayesian model comparison with posterior odds is sensitive to sample size and the choice of observable variables for estimation. We show that posterior model probabilities strongly penalize overfitting, which can lead us to favor a less parameterized model over the true data-generating process when the two become arbitrarily close to each other. We also illustrate that the spillovers from monetary policy across countries have an added confounding effect.
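As a concrete illustration of the comparison machinery, here is a minimal sketch of how posterior model probabilities are formed from log marginal likelihoods. The numbers are made-up stand-ins for three competing specifications, not estimates from the chapter.

```python
import numpy as np

def posterior_model_probs(log_marglik, log_prior=None):
    """Posterior model probabilities p(M_k | y) from log marginal
    likelihoods, computed with a max-shift for numerical stability."""
    log_marglik = np.asarray(log_marglik, dtype=float)
    if log_prior is None:                 # equal prior model probabilities
        log_prior = np.zeros_like(log_marglik)
    log_post = log_marglik + log_prior
    log_post -= log_post.max()
    probs = np.exp(log_post)
    return probs / probs.sum()

# Hypothetical log marginal likelihoods for three specifications
# (open economy, no monetary frictions, autarky) -- values illustrative.
print(posterior_model_probs([-1041.3, -1047.9, -1052.2]).round(4))
```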

Article
Publication date: 1 April 2006

María M. Abad‐Grau and Daniel Arias‐Aranda

Abstract

Purpose

Information analysis tools enhance the possibilities of firm competition in terms of knowledge management. However, decision support systems (DSS) are still far from everyday use by managers and academicians. This paper aims to present a framework of analysis based on Bayesian networks (BN), whose accuracy is measured in order to assess the scientific evidence.

Design/methodology/approach

Different learning algorithms based on BN are applied to extract relevant information about the relationship between operations strategy and flexibility in a sample of engineering consulting firms. Feature selection algorithms are able to automatically improve the accuracy of these classifiers.

Findings

Results show that the behaviors of the firms can be reduced to a set of rules that support decision making about investments in technology and production resources.

Originality/value

In contrast with classical statistical methods, Bayesian classifiers are able to model a variety of relationships between the variables affecting the dependent variable. In contrast with other methods from the artificial intelligence field, such as neural networks or support vector machines, Bayesian classifiers are white-box models that can be directly interpreted. Together with feature selection techniques from the machine learning field, they are able to automatically learn a model that accurately fits the data.
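The paper evaluates several BN learning algorithms; as a rough sketch of the pipeline it describes — automatic feature selection feeding a Bayesian classifier — here is a minimal scikit-learn example on synthetic data. GaussianNB is the simplest member of the Bayesian classifier family, not the specific learners used in the paper, and the data here merely stand in for the survey sample.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline

# Synthetic stand-in for the survey data: 30 candidate features, only a
# handful informative about the class (e.g. a flexibility category).
X, y = make_classification(n_samples=200, n_features=30, n_informative=5,
                           random_state=0)

# Automatic feature selection feeding a Bayesian (naive Bayes) classifier.
model = make_pipeline(SelectKBest(mutual_info_classif, k=5), GaussianNB())
scores = cross_val_score(model, X, y, cv=5)
print(f"mean CV accuracy: {scores.mean():.3f}")
```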

Details

Industrial Management & Data Systems, vol. 106 no. 4
Type: Research Article
ISSN: 0263-5577

Article
Publication date: 16 March 2010

Leonidas A. Zampetakis and Vassilis S. Moustakis

Abstract

Purpose

The purpose of this paper is to present an inductive methodology which supports the ranking of entities. The methodology is based on Bayesian latent variable measurement modeling and makes use of assessment across composite indicators to assess internal and external model validity (uncertainty is used in lieu of validity). The proposed methodology is generic, and it is demonstrated on a well-known data set related to the relative "doing business" position of a country.

Design/methodology/approach

The methodology is demonstrated using data from the World Bank's "Doing Business 2008" project. A Bayesian latent variable measurement model is developed and both internal and external model uncertainties are considered.

Findings

The methodology enables the quantification of model structure uncertainty through comparisons among competing models, nested or non-nested, using both an information-theoretic approach and a Bayesian approach. Furthermore, it estimates the degree of uncertainty in the rankings of alternatives.
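The rank-uncertainty idea can be sketched in a few lines: given posterior draws of a latent score for each entity, ranking within each draw induces a full distribution over ranks. The draws below are synthetic placeholders, not output from the paper's fitted model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical posterior draws of a latent "ease of doing business" score
# for five entities (draws x entities); in practice these would come from
# the fitted Bayesian measurement model.
draws = rng.normal(loc=[0.9, 0.8, 0.5, 0.2, 0.1], scale=0.15, size=(4000, 5))

# Rank within each posterior draw (1 = best), then summarize the induced
# distribution over ranks -- the uncertainty in the rankings.
ranks = draws.shape[1] - draws.argsort(axis=1).argsort(axis=1)
for e in range(draws.shape[1]):
    dist = np.bincount(ranks[:, e], minlength=6)[1:] / len(ranks)
    print(f"entity {e}: P(rank 1..5) = {dist.round(2)}")
```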

Research limitations/implications

Analyses are restricted to first‐order Bayesian measurement models.

Originality/value

Overall, the presented methodology contributes to a better understanding of ranking efforts, providing a useful tool for those who publish rankings to gain greater insight into the nature of the distinctions they disseminate.

Details

Journal of Modelling in Management, vol. 5 no. 1
Type: Research Article
ISSN: 1746-5664

Book part
Publication date: 1 January 2008

Dimitris Korobilis

Abstract

This paper addresses the issue of improving the forecasting performance of vector autoregressions (VARs) when the set of available predictors is too large to handle with the methods and diagnostics used in traditional small-scale models. First, available information from a large dataset is summarized into a considerably smaller set of variables through factors estimated using standard principal components. However, even after reducing the dimension of the data, the true number of factors may still be large. For that reason I introduce into my analysis simple and efficient Bayesian model selection methods. Model estimation and selection of predictors is carried out automatically through a stochastic search variable selection (SSVS) algorithm which requires minimal input by the user. I apply these methods to forecast eight main U.S. macroeconomic variables using 124 potential predictors. I find improved out-of-sample fit in high-dimensional specifications that would otherwise suffer from the proliferation of parameters.
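For readers unfamiliar with SSVS, here is a minimal Gibbs-sampler sketch of its core step for a plain linear regression, using the classic George-McCulloch spike-and-slab prior. The chapter embeds SSVS in a richer factor setting with many more predictors; the sizes, scales, and fixed noise variance below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic data: 20 candidate predictors (the paper uses 124), of which
# only the first three truly matter.
n, p = 200, 20
X = rng.normal(size=(n, p))
beta_true = np.zeros(p)
beta_true[:3] = [1.5, -1.0, 0.7]
y = X @ beta_true + rng.normal(size=n)

# Spike-and-slab scales, fixed noise variance, prior inclusion prob q.
tau0, tau1, sigma2, q = 0.01, 1.0, 1.0, 0.5
gamma = np.ones(p, dtype=int)
incl = np.zeros(p)
n_iter, burn = 2000, 500

for it in range(n_iter):
    # Draw beta | gamma, y from its conjugate normal full conditional.
    d_inv = np.where(gamma == 1, 1 / tau1**2, 1 / tau0**2)
    cov = np.linalg.inv(X.T @ X / sigma2 + np.diag(d_inv))
    beta = rng.multivariate_normal(cov @ X.T @ y / sigma2, cov)
    # Draw gamma_j | beta_j: Bernoulli with slab-vs-spike density odds.
    log_slab = -0.5 * beta**2 / tau1**2 - np.log(tau1)
    log_spike = -0.5 * beta**2 / tau0**2 - np.log(tau0)
    prob = 1 / (1 + (1 - q) / q * np.exp(log_spike - log_slab))
    gamma = rng.binomial(1, prob)
    if it >= burn:
        incl += gamma

print((incl / (n_iter - burn)).round(2))  # posterior inclusion probabilities
```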

Details

Bayesian Econometrics
Type: Book
ISBN: 978-1-84855-308-8

Details

Applying Maximum Entropy to Econometric Problems
Type: Book
ISBN: 978-0-76230-187-4

Open Access
Article
Publication date: 25 June 2020

Paula Cruz-García, Anabel Forte and Jesús Peiró-Palomino

Abstract

Purpose

There is abundant literature analyzing the determinants of banks' profitability through its main component: the net interest margin. Some of these determinants are suggested by seminal theoretical models and their subsequent extensions; others are ad hoc selections. To date, no studies have assessed these models from a Bayesian model uncertainty perspective. This paper aims to analyze this issue for the EU-15 countries over the period 2008-2014, which mainly corresponds to the Great Recession years.

Design/methodology/approach

The paper follows a Bayesian variable selection approach to analyze, in a first step, which of the variables suggested by the literature are actually good predictors of banks' net interest margin. In a second step, using a model selection approach, the authors select the model with the best fit. Finally, the paper provides inference and quantifies the economic impact of the variables selected as good candidates.
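One standard way to carry out such a two-step exercise is to enumerate candidate models, score each with a closed-form Bayes factor under Zellner's g-prior, and convert the scores into posterior model probabilities. The sketch below does this on synthetic data; the variable names, sizes, and effects are hypothetical, and the paper's exact prior setup may differ.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(3)

# Synthetic stand-in for the bank panel: the margin is driven by two of
# five candidate determinants (all names and values hypothetical).
n = 300
X = rng.normal(size=(n, 5))
y = 0.8 * X[:, 0] - 0.5 * X[:, 2] + rng.normal(size=n)

def log_bf_gprior(y, Xs, g):
    """Log Bayes factor of a model against the intercept-only null under
    Zellner's g-prior (closed form, Liang et al., 2008)."""
    n, p = Xs.shape
    yc = y - y.mean()
    Xc = Xs - Xs.mean(axis=0)
    coef, *_ = np.linalg.lstsq(Xc, yc, rcond=None)
    r2 = 1 - np.sum((yc - Xc @ coef) ** 2) / np.sum(yc**2)
    return 0.5 * (n - 1 - p) * np.log1p(g) - 0.5 * (n - 1) * np.log1p(g * (1 - r2))

# Enumerate all non-empty subsets; posterior probabilities under a uniform
# prior over models (the null model would enter with log BF = 0).
models = [s for k in range(1, 6) for s in combinations(range(5), k)]
logbf = np.array([log_bf_gprior(y, X[:, list(s)], g=n) for s in models])
probs = np.exp(logbf - logbf.max())
probs /= probs.sum()
best = int(np.argmax(probs))
print(models[best], round(float(probs[best]), 3))
```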

Findings

The results widely support the validity of the determinants proposed by the seminal models, with only minor discrepancies, reinforcing their capacity to explain net interest margin disparities also during the recent period of restructuring of the banking industry.

Originality/value

The paper is, to the best of the authors' knowledge, the first to follow a Bayesian variable selection approach in this field of the literature.

Details

Applied Economic Analysis, vol. 28 no. 83
Type: Research Article
ISSN: 2632-7627

Book part
Publication date: 19 November 2014

Guillaume Weisang

Abstract

In this paper, I propose an algorithm combining adaptive sampling and reversible jump MCMC to deal with the problem of variable selection in time-varying linear models. These types of models arise naturally in financial applications, as illustrated by a motivational example. The methodology proposed here, dubbed adaptive reversible jump variable selection, differs from typical approaches by avoiding estimation of the factors and the difficulties stemming from the presence of the documented single-factor bias. Illustrated by several simulated examples, the algorithm is shown to select the appropriate variables among a large set of candidates.
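To convey the flavor of MCMC-based variable selection over model space, here is a much simpler stand-in: a fixed-proposal Metropolis search with single-variable add/drop moves, scored with a BIC approximation to the marginal likelihood. This is not the chapter's adaptive reversible jump sampler, and the data are synthetic placeholders for a static (not time-varying) regression.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic returns regression: 3 relevant variables among 15 candidates.
n, p = 250, 15
X = rng.normal(size=(n, p))
y = X[:, [0, 4, 9]] @ np.array([1.0, -0.8, 0.6]) + rng.normal(size=n)

def log_marglik_bic(y, Xs):
    """BIC-style approximation to the log marginal likelihood."""
    n, k = len(y), Xs.shape[1] + 1
    A = np.column_stack([np.ones(n), Xs])
    resid = y - A @ np.linalg.lstsq(A, y, rcond=None)[0]
    return -0.5 * n * np.log(resid @ resid / n) - 0.5 * k * np.log(n)

gamma = np.zeros(p, dtype=bool)        # start from the empty model
cur = log_marglik_bic(y, X[:, gamma])
incl = np.zeros(p)
n_iter, burn = 5000, 1000

for it in range(n_iter):
    j = rng.integers(p)                # propose flipping one inclusion flag
    prop = gamma.copy()
    prop[j] = ~prop[j]
    new = log_marglik_bic(y, X[:, prop])
    if np.log(rng.random()) < new - cur:   # symmetric proposal: MH on models
        gamma, cur = prop, new
    if it >= burn:
        incl += gamma

print((incl / (n_iter - burn)).round(2))   # posterior inclusion frequencies
```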

Book part
Publication date: 18 January 2022

Andreas Pick and Matthijs Carpay

Abstract

This chapter investigates the performance of different dimension reduction approaches for large vector autoregressions in multi-step-ahead forecasts. The authors consider factor-augmented VAR models using principal components and partial least squares, random subset regression, random projection, random compression, and estimation via LASSO and Bayesian VAR. The authors compare the accuracy of iterated and direct multi-step point and density forecasts. The comparison is based on macroeconomic and financial variables from the FRED-MD database. Our findings suggest that random subspace methods and LASSO estimation deliver the most precise forecasts.
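As a minimal sketch of one approach compared here — principal-component factor extraction followed by a direct (rather than iterated) multi-step forecast — the following runs on a synthetic panel standing in for FRED-MD. All sizes and dynamics are made up; the chapter's full experiment covers many more methods and density forecasts.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic panel: T periods, N series driven by a few common factors.
T, N, k, h = 240, 60, 3, 6             # h = forecast horizon
F = rng.normal(size=(T, k)).cumsum(axis=0) * 0.1
X = F @ rng.normal(size=(k, N)) + rng.normal(scale=0.5, size=(T, N))
y = X[:, 0]                            # target series

# Step 1: principal-component factors from the standardized panel.
Z = (X - X.mean(axis=0)) / X.std(axis=0)
U, s, _ = np.linalg.svd(Z, full_matrices=False)
factors = U[:, :k] * s[:k]             # estimated factors, shape (T, k)

# Step 2: a direct h-step forecast regresses y[t+h] on the factors at t,
# sidestepping the error accumulation of iterating a one-step model.
A = np.column_stack([np.ones(T - h), factors[:-h]])
coef, *_ = np.linalg.lstsq(A, y[h:], rcond=None)
forecast = np.concatenate([[1.0], factors[-1]]) @ coef
print(round(float(forecast), 3))
```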

Details

Essays in Honor of M. Hashem Pesaran: Prediction and Macro Modeling
Type: Book
ISBN: 978-1-80262-062-7
