Search results

1–10 of over 28,000
Article
Publication date: 13 April 2015

Mauricio Palmeira and Gerri Spassova

Abstract

Purpose

The purpose of this study is to investigate consumer reactions to professionals who use decision aids to make recommendations. The authors propose that people react negatively to decision aids only when they are used in place of human expert judgment. When used in combination with expert judgment, decision aids are not perceived negatively and may even enhance service evaluations.

Design/methodology/approach

Three online experiments are presented. Participants indicated their perceptions regarding the recommendation strategy of professionals and their impressions of these professionals using one of three strategies: one based on expertise only, one based on decision aids only and a combination of the two (a hybrid approach). Both within- and between-subjects designs were used.

Findings

Contrary to previous research that has found a negative reaction to professionals who use decision aids, the authors find that consumers actually appreciate these professionals, as long as the use of decision aids does not replace expert judgment. The authors also find that when people are given the opportunity to compare a pure expert judgment approach with a hybrid approach (decision aid in combination with expert judgment), they prefer the latter.

Research limitations/implications

Although findings should extend to various contexts, this research is limited to the three contexts examined and to the type of use of decision aid described.

Practical implications

The research has significant practical implications: decision aids have been shown to improve decision accuracy, but previous research indicated that consumers view professionals who use them negatively. The current research more clearly delineates the situations under which negative reactions are likely to occur and makes recommendations regarding circumstances in which reactions are actually quite positive.

Originality/value

Reactions to professionals using decision aids have been investigated outside the marketing literature. However, this is the first work to show that consumers actually have positive reactions to professionals using decision aids, as long as they do not replace expert judgment.

Details

European Journal of Marketing, vol. 49 no. 3/4
Type: Research Article
ISSN: 0309-0566

Article
Publication date: 11 November 2013

Erin Pleggenkuhle-Miles, Theodore A. Khoury, David L. Deeds and Livia Markoczy

Abstract

Purpose

This study aims to explore the objectivity in third-party ratings. Third-party ratings are often based on some form of aggregation of various experts' opinions with the assumption that the potential judgment biases of the experts cancel each other out. While psychology research has suggested that experts can be unintentionally biased, management literature has not considered the effect of expert bias on the objectivity of third-party ratings. Thus, this study seeks to address this issue.

Design/methodology/approach

Ranking data from US News and World Report between 1993 and 2008, institution-related variables and, to represent sports prominence, NCAA football and basketball performance variables are leveraged to test the hypotheses. A mediation model is tested using regression with panel-corrected standard errors.

Findings

This study finds that the judgments of academicians and recruiters, concerning the quality of universities, have been biased by the prominence of a university's sports teams and that the bias introduced to these experts mediates the aggregated bias in the resultant rankings of MBA programs. Moreover, it finds that experts may inflate rankings by up to two positions.

Practical implications

This study is particularly relevant for university officials as it uncovers how universities can tangibly manipulate the relative perception of quality through sports team prominence. For third-party rating systems, the reliability of ratings based on aggregated expert judgments is called into question.

Originality/value

This study addresses a significant gap in the literature by examining how a rating system may be unintentionally biased through the aggregation of experts' judgments. Given the heavy reliance on third-party rating systems by both academics and the general population, addressing the objectivity of such ratings is crucial.

Details

Management Decision, vol. 51 no. 9
Type: Research Article
ISSN: 0025-1747

Article
Publication date: 3 April 2009

Michael F. Cassidy and Dennis Buede

Abstract

Purpose

The purpose of this paper is to examine critically the accuracy of expert judgment, drawing on empirical evidence and theory from multiple disciplines. It suggests that counsel offered with confidence by experts might, under certain circumstances, be without merit, and presents approaches to assessing the accuracy of such counsel.

Design/methodology/approach

The paper synthesizes research findings on expert judgment drawn from multiple fields, including psychology, criminal justice, political science, and decision analysis. It examines internal and external factors affecting the veracity of what experts may judge to be matters of common sense, using a semiotic structure.

Findings

In multiple domains, including management, expert accuracy is, in general, no better than chance. Increased experience, however, is often accompanied by an unjustified increase in self‐confidence.

Practical implications

While the dynamic nature of decision making in organizations renders the development of a codified, reliable knowledge base potentially unachievable, there is value in recognizing these limitations and employing tactics to explore more thoroughly both problem and solution spaces.

Originality/value

The paper's originality lies in its integration of recent, multiple‐disciplinary research as a basis for persuading decision makers of the perils of accepting expert advice without skepticism.

Details

Management Decision, vol. 47 no. 3
Type: Research Article
ISSN: 0025-1747

Article
Publication date: 14 September 2015

Thomas DeCarlo, Tirthankar Roy and Michael Barone

Abstract

Purpose

The purpose of this study is to examine how trends in historical data influence two types of predictive judgments: territory selection and salesperson hiring. Sales managers are frequently confronted with decisions that explicitly or implicitly involve forecasting with limited information. The authors conceptualize how the magnitude of these trend effects may be affected by the experience managers have in making these types of judgments.

Design/methodology/approach

The authors conduct online experiments with practicing managers. Study 1 uses regression, whereas Study 2 uses a deterministic process to develop a priori benchmark forecasts. Ordinary least squares is then used to estimate managers' decisions, which are compared to the benchmark forecasts to determine accuracy.
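The benchmark logic of Study 2 can be illustrated with a minimal sketch. Everything here is hypothetical (toy data, a simple linear-trend extrapolation standing in for the paper's deterministic benchmark), not the authors' actual model:

```python
import numpy as np

# Hypothetical illustration (not the study's data): each row is a
# salesperson's past annual sales; the manager forecasts the next period.
history = np.array([
    [100.0, 110.0, 121.0],   # rising trend
    [120.0, 110.0, 100.0],   # falling trend
    [105.0, 104.0, 106.0],   # roughly flat
])
manager_forecast = np.array([140.0, 95.0, 105.0])

# Deterministic a priori benchmark: a linear trend fitted to each history
# and extrapolated one step ahead.
t = np.arange(history.shape[1])
benchmark = np.array([np.polyval(np.polyfit(t, row, 1), len(t)) for row in history])

# OLS of manager forecasts on the benchmark: a slope above 1 suggests
# overweighting of the historical trend, below 1 underweighting.
slope, intercept = np.polyfit(benchmark, manager_forecast, 1)
print(benchmark.round(1), round(slope, 2))  # slope > 1 in this toy data
```

The comparison of the fitted slope against the neutral value of 1 mirrors the paper's notion of over- versus underweighting historical performance trends.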

Findings

Study 1 provides evidence of a curvilinear relationship between experience and reliance on the trend data whereby the sales territory selections of novice sales managers exhibited greater susceptibility to informational trends than did the evaluations of naïve and expert decision-makers. A benchmark analysis in Study 2 further revealed that the salesperson selections made by novice and expert sales managers were equally biased, albeit in opposite directions, with novices overweighting and experts underweighting historical performance trends.

Originality/value

The present inquiry is the first to provide insights into an important issue that has been the subject of equivocal findings, namely, whether experience in a judgmental domain exerts a facilitating or debilitating effect on sales manager decision-making. In this regard, some research supports the intuition that experience in making a particular type of decision can insulate managers from judgmental bias and, in doing so, improve decision quality (see Shanteau, 1992a, for a summary). In contrast, other work provides a more pessimistic view by demonstrating that the quality of decision-making is either unaffected by or can erode with additional experience (Hutchinson et al., 2010). To help reconcile these conflicting findings, the authors presented and tested a theoretical framework conceptualizing how trends may influence predictive judgments across three levels of decision-maker experience.

Details

European Journal of Marketing, vol. 49 no. 9/10
Type: Research Article
ISSN: 0309-0566

Article
Publication date: 27 July 2012

Shi‐Woei Lin and Ming‐Tsang Lu

Abstract

Purpose

Methods and techniques of aggregating preferences or priorities in the analytic hierarchy process (AHP) usually ignore variation or dispersion among experts and are vulnerable to extreme values (generated by particular viewpoints or experts trying to distort the final ranking). The purpose of this paper is to propose a modelling approach and a graphical representation to characterize inconsistency and disagreement in the group decision making in the AHP.

Design/methodology/approach

The authors apply a regression approach for estimating the decision weights of the AHP using linear mixed models (LMM). They also test the linear mixed model and the multi‐dimensional scaling graphical display using a case of strategic performance management in education.

Findings

In addition to determining the weight vectors, the model also allows the authors to decompose the variation or uncertainty in experts' judgments. Well-established statistical theory can then be used to estimate and rigorously test disagreement among experts, the residual uncertainty due to rounding errors in the AHP scale, and the inconsistency within individual experts' judgments. Beyond characterizing these different sources of uncertainty, the model allows the authors to rigorously test other factors that might significantly affect weight assessments.
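The regression view of AHP weight estimation can be sketched in its simplest form: logarithmic least squares on a single expert's pairwise-comparison matrix. The matrix below is hypothetical, and the sketch omits the paper's linear mixed model, which would add expert-level random effects on top of this to separate between-expert disagreement from within-expert inconsistency:

```python
import numpy as np

# Hypothetical pairwise-comparison matrix from one expert (AHP 1-9 scale):
# A[i, j] states how much more important criterion i is than criterion j.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

# Logarithmic least squares (row geometric mean): ln(a_ij) ~ v_i - v_j.
log_means = np.log(A).mean(axis=1)
weights = np.exp(log_means)
weights /= weights.sum()

# Residuals ln(a_ij) - (v_i - v_j) measure within-expert inconsistency;
# a linear mixed model would add random effects per expert for disagreement.
residuals = np.log(A) - (log_means[:, None] - log_means[None, :])
print(weights.round(3), np.abs(residuals).max().round(3))
```

The residual term is what the decomposition in the Findings separates into disagreement, rounding error, and inconsistency components.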

Originality/value

This study provides a model to better characterize different sources of uncertainty. This approach can improve decision quality by allowing analysts to view the aggregated judgments in a proper context and pinpoint the uncertain component that significantly affects decisions.

Article
Publication date: 18 April 2008

Asokan Anandarajan, Gary Kleinman and Dan Palmon

Abstract

Purpose

Prior literature provides clear evidence that the judgments of experts differ from those of non-experts. For example, Smith and Kida concluded that the common biases they investigated are often reduced when experts perform job-related tasks, as compared to students. The aim of this theoretical study is to examine whether heuristic biases significantly moderate the going-concern judgments of experts versus novices.

Design/methodology/approach

The authors address the posited question by marshalling extant literature on expert and novice judgments and linking it to concepts drawn from the cognitive sciences through the Brunswik Lens Model.

Findings

The authors identify a number of heuristics that may bias the going-concern decision, based on the work of Kahneman and Tversky, among others. They conclude that experience mitigates the unintended effects of heuristic biases.

Practical implications

The conclusions have implications for the education and training of auditors, and for the expectation gap. They suggest that both awareness of factors that affect understanding of auditing reports and greater attention to training are important in reducing the expectation gap.

Originality/value

This paper develops additional theoretical understanding of factors that may impact the expectation gap. While there has been limited prior discussion of the impact of cognitive factors on differences between experts and novices, the paper significantly expands the range of factors discussed. As such, it should provide a stimulus to new research in this important area.

Details

Managerial Auditing Journal, vol. 23 no. 4
Type: Research Article
ISSN: 0268-6902

Article
Publication date: 13 June 2019

Jiří Šindelář and Martin Svoboda

Abstract

Purpose

This paper aims to deal with expert judgment and its predictive ability in the context of investment funds. The judgmental ratings awarded by a large set of experts to a sample of dynamic investment funds operating in Central and Eastern Europe were compared with the funds' objective performance, both past and future, relative to the time of the forecast.

Design/methodology/approach

Data on the survey sample enabled the authors to evaluate both ex post judgmental validity (how well the experts reflected the previous performance of the funds) and ex ante predictive accuracy (how well their judgments estimated the future performance of the funds). For this purpose, logistic regression was used to estimate past values and a linear model to estimate future values.

Findings

It was found that the experts (independent academicians, senior bank specialists and senior financial advisors) were only able to successfully reflect past annual returns of a five-year period, failing to reflect costs and annual volatility and, mainly, failing to predict any of the indicators on the same five-year horizon.

Practical implications

The outcomes of this paper confirm that expert judgment should be used with caution in the context of financial markets and mainly in situations when domain knowledge is applicable. Procedures incorporating judgmental evaluations, such as individual investment advice, should be thoroughly reviewed in terms of client value-added, to eliminate potential anchoring bias.

Originality/value

This paper sheds new light on the quality and nature of individual judgment produced by financial experts. Such judgments are prevalent in many situations influencing clients' decision-making, be it financial advice or product contests. As such, the findings underline the need for scepticism when these judgments are taken into account.

Details

foresight, vol. 21 no. 4
Type: Research Article
ISSN: 1463-6689

Article
Publication date: 16 March 2012

Shi‐Woei Lin and Ssu‐Wei Huang

Abstract

Purpose

The purpose of this paper is to investigate how expert overconfidence and dependence affect the calibration of aggregated probability judgments obtained by various linear opinion‐pooling models.

Design/methodology/approach

The authors used a large database containing real‐world expert judgments, and adopted the leave‐one‐out cross‐validation technique to test the calibration of aggregated judgments obtained by Cooke's classical model, the equal‐weight linear pooling method, and the best‐expert approach. Additionally, the significance of the effects using linear models was rigorously tested.
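The equal-weight linear opinion pool at the centre of the comparison is simple to state. As a minimal sketch with hypothetical expert distributions (Cooke's classical model would instead weight experts by calibration and information scores derived from seed questions, which is omitted here):

```python
import numpy as np

# Hypothetical setup: three experts each give a discrete probability
# distribution over the same three outcomes for one question.
expert_pmfs = np.array([
    [0.7, 0.2, 0.1],
    [0.5, 0.3, 0.2],
    [0.6, 0.3, 0.1],
])

# Equal-weight linear opinion pool: a simple average of the distributions.
# The pooled result is itself a valid probability distribution.
pooled = expert_pmfs.mean(axis=0)
print(pooled)
```

Because averaging tends to widen the effective spread relative to any single overconfident expert, pooling multiple opinions helps, but, as the Findings note, it does not fully counteract shared overconfidence.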

Findings

Significant differences were found between methods. Both linear‐pooling aggregation approaches significantly outperformed the best‐expert technique, indicating the need for inputs from multiple experts. The significant overconfidence effect suggests that linear pooling approaches do not effectively counteract the effect of expert overconfidence. Furthermore, the second‐order interaction between aggregation method and expert dependence shows that Cooke's classical model is more sensitive to expert dependence than equal weights, with high dependence generally leading to much poorer aggregated results; by contrast, the equal‐weight approach is more robust under different dependence levels.

Research limitations/implications

The results suggest that methods involving broadening of subjective confidence intervals or distributions may occasionally be useful for mitigating the overconfidence problem. An equal-weight approach might be more favorable when the level of dependence between experts is high. Although the number of experts and the number of seed questions were also found to significantly affect the calibration of the aggregated distribution, further research to find the minimum number of questions or experts required to ensure satisfactory aggregated performance would be desirable. Furthermore, other metrics or probability scoring rules should be used to check the robustness and generalizability of the authors' conclusions.

Originality/value

The paper provides empirical evidence of critical factors affecting the calibration of the aggregated intervals or distribution judgments obtained by linear opinion‐pooling methods.

Details

Journal of Modelling in Management, vol. 7 no. 1
Type: Research Article
ISSN: 1746-5664

Article
Publication date: 1 June 2003

Ralf Hansmann, Harald A. Mieg, Helmut W. Crott and Roland W. Scholz

Abstract

This paper includes three analyses concerning: expert support in the selection of impact variables for scientific models relevant to environmental planning, the quality of students' individual estimates of corresponding impacts before and after a group discussion, and the accuracy of artificially aggregated judgments of independent groups. Participants were students of environmental sciences at ETH Zurich. The first analysis revealed that, during participation in an environmental case study, students' individual estimates of the impacts of variables that had been suggested by experts improved, compared with their estimates of the impacts of additional variables the students had selected themselves. The remaining analyses consider group discussions on the strength of particular environmental impacts. The quality of the estimates was analyzed with reference to expert estimates of the impacts.

Details

International Journal of Sustainability in Higher Education, vol. 4 no. 2
Type: Research Article
ISSN: 1467-6370

Article
Publication date: 1 April 1997

HASHEM AL‐TABTABAI, NABIL KARTAM, IAN FLOOD and ALEX P. ALEX

Abstract

Construction projects are susceptible to cost and time overruns. Variations from planned schedule and cost estimates can result in huge losses for owners and contractors. In extreme cases, the viability of the project itself is jeopardised as a result of variations from baseline plans. Hence, new methods and techniques which assist project managers in forecasting the expected variance in schedule and cost should be developed. This paper proposes a judgment-based forecasting approach which identifies schedule variances from a baseline plan for typical construction projects. The proposed approach adopts multiple regression techniques and further utilises neural networks to capture the decision-making procedure of project experts involved in schedule monitoring and prediction. The models developed were applied to a multistorey building project under construction and were found feasible for use in similar construction projects. The advantages and limitations of these two modelling processes for predicting schedule variance are discussed. The developed models were integrated with existing project management computer systems for the convenient and realistic generation of revised schedules at appropriate junctures during the progress of the project.

Details

Engineering, Construction and Architectural Management, vol. 4 no. 4
Type: Research Article
ISSN: 0969-9988
