Carsten Lausberg and Patrick Krieger
Abstract
Purpose
Scoring is a widely used, long-established, and universally applicable method of measuring risks, especially those that are difficult to quantify. Unfortunately, the scoring method is often misused in real estate practice and underestimated in academia. The purpose of this paper is to supplement the literature with general rules under which scoring systems should be designed and validated, so that they can become reliable risk instruments.
Design/methodology/approach
The paper combines the rules, or axioms, for coherent risk measures known from the literature with those for scoring instruments. The result is a system of rules that a risk scoring system should fulfil. The approach is theoretical, based on a literature survey and reasoning.
Findings
At first, the paper clarifies that a risk score should express the variation of a property’s yield and not of its quality, as it is often done in practice. Then the axioms for a coherent risk scoring are derived, e.g. the independence of the risk factors. Finally, the paper proposes procedures for valid and reliable risk scoring systems, e.g. the out-of-time validation.
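Scoring systems of the kind the paper analyses are classically built as weighted sums of factor ratings. The sketch below shows that generic form with hypothetical factors and weights; it is an illustration only, and the paper's axioms (e.g. independence of the risk factors) constrain how such factors may legitimately be chosen.

```python
def weighted_risk_score(factor_scores, weights):
    """Classic weighted-sum scoring: each risk factor is rated on a point
    scale and the ratings are combined with weights that sum to 1.
    (Generic illustration, not the paper's proposed system.)"""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    assert set(factor_scores) == set(weights)
    return sum(weights[f] * s for f, s in factor_scores.items())

# Hypothetical factors rated 1 (low risk) to 10 (high risk).
factors = {"location": 3, "tenant_credit": 6, "building_age": 5}
weights = {"location": 0.5, "tenant_credit": 0.3, "building_age": 0.2}
print(weighted_risk_score(factors, weights))  # 0.5*3 + 0.3*6 + 0.2*5 = 4.3
```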
Practical implications
Although it is a theoretical work, the paper also focuses on practical applicability. The findings are illustrated with examples of scoring systems.
Originality/value
Rules for risk measures and for scoring systems have been established long ago, but the combination is a first. In this way, the paper contributes to real estate risk research and risk management practice.
Abstract
A gradual change in how successful procurement is evaluated, in both the private and the public sector, has occurred in recent years. Indeed, as far as economic efficiency is concerned, decisions have shifted from a price-only criterion for measuring success to a multi-criteria approach where various dimensions of quality, as well as price, are considered. The most common way to express such a shift is to say that procurement should deliver "best value for money" (BVM). That is, to award the contract, both monetary and non-monetary components of an offer are to be considered. Whether in competitive bidding or negotiations, BVM is typically formalized by a scoring formula, namely a rule for assigning dimensionless numbers to different elements of an offer, often expressed in different units of measurement. The contract is then awarded according to the total score obtained by a bid. The main goal of this paper is to present a critical overview of some main themes related to the notion of BVM, discussing a few typical forms of scoring rules as a way to formalize the procurer's preferences.
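A scoring formula of the kind described above is often a linear rule that maps price and technical quality onto dimensionless scores and sums them with weights. A minimal sketch under assumed weights and a hypothetical linear price interpolation; the paper surveys several such forms, and this particular rule is not taken from it.

```python
def bvm_score(price, quality_points, p_min, p_max,
              w_price=0.3, w_quality=0.7, max_quality=100.0):
    """One common (hypothetical) BVM scoring rule: linear interpolation
    for price, normalized technical points for quality; both components
    are dimensionless in [0, 1] before weighting."""
    price_score = (p_max - price) / (p_max - p_min)   # cheapest bid -> 1
    quality_score = quality_points / max_quality
    return w_price * price_score + w_quality * quality_score

# Hypothetical bids: (price, technical points awarded by the committee).
bids = {"A": (100.0, 90.0), "B": (80.0, 70.0), "C": (120.0, 95.0)}
prices = [p for p, _ in bids.values()]
lo, hi = min(prices), max(prices)
scores = {k: bvm_score(p, q, lo, hi) for k, (p, q) in bids.items()}
winner = max(scores, key=scores.get)
print(scores, "winner:", winner)
```

Note how the weights encode the procurer's preferences: with `w_quality=0.7` the cheap but lower-quality bid B only narrowly beats A, and shifting weight toward quality would reverse the ranking.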
Michael Geis and Martin Middendorf
Abstract
Purpose
The purpose of this paper is to propose an algorithm based on the ant colony optimization (ACO) metaheuristic for producing harmonized melodies. ACO is a nature‐inspired metaheuristic in which a colony of ants searches for an optimum of a function. The algorithm works in two stages. In the first stage it creates a melody. The obtained melody is then harmonized according to the rules of baroque harmony in the second stage. A multi‐objective version of the algorithm is also proposed, where each tier of rules is optimized as a separate objective.
Design/methodology/approach
The ACO metaheuristic is adapted to graphs representing notes and chords. Desirability of a sequence of notes is measured by conformance to compositional rules. The fitness of a melody is evaluated with five equally weighted rules governing smoothness of the melody curve, its contour, tendency tone resolution, tone colors and the pitch of the final note. Harmonization is guided by six rules, grouped into three tiers of two rules each. These rules cover chord arrangement, voice distance, voice leading, harmonic progression, smoothness, and chord resolution. Rules of a tier do not score unless those of the previous tier yield high values.
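The melody- and harmony-specific rules above are the authors'; the underlying ACO loop (probabilistic sequence construction guided by pheromone, evaporation, reinforcement of good solutions) can be sketched generically. The fitness function below is a crude stand-in for the paper's "smooth melody curve" rule, and all names are illustrative, not the authors' implementation.

```python
import random

def aco_sequence_search(n_steps, options, fitness, n_ants=20, n_iter=50,
                        rho=0.1, seed=0):
    """Generic ant-colony search for a high-fitness sequence.
    pher[t][o]: learned desirability of choosing option o at step t."""
    rng = random.Random(seed)
    pher = [[1.0 for _ in options] for _ in range(n_steps)]
    best, best_fit = None, float("-inf")
    for _ in range(n_iter):
        for _ in range(n_ants):
            seq = []
            for t in range(n_steps):
                # roulette-wheel selection proportional to pheromone
                r, acc = rng.uniform(0.0, sum(pher[t])), 0.0
                for o, tau in zip(options, pher[t]):
                    acc += tau
                    if r <= acc:
                        seq.append(o)
                        break
                else:
                    seq.append(options[-1])  # guard against float round-off
            f = fitness(seq)
            if f > best_fit:
                best, best_fit = seq, f
        # evaporate, then reinforce the best-so-far sequence
        for t in range(n_steps):
            pher[t] = [(1 - rho) * tau for tau in pher[t]]
            pher[t][options.index(best[t])] += best_fit
    return best, best_fit

# Toy fitness in (0, 1]: reward stepwise motion (a crude 'smoothness' rule).
notes = list(range(8))
smooth = lambda seq: 1.0 / (1.0 + sum(abs(a - b) for a, b in zip(seq, seq[1:])))
melody, score = aco_sequence_search(8, notes, smooth)
print(melody, score)
```

The paper's tiered scheme would replace the single `fitness` with rule groups that only contribute once the previous tier scores highly; the pheromone mechanics stay the same.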
Findings
The proposed algorithm improves on the only other existing musical ACO by adding the notion of harmony and by evolving voices codependently. Its output is comparable to that of other types of existing algorithms in the field (genetic algorithms, rule‐based search algorithms). The multi‐objective variant significantly enhances solution quality and convergence speed, which makes extensions of the system for real‐time performance realistic.
Originality/value
This algorithm is the first ACO algorithm proposed for the problem of melody creation and harmonization.
Abstract
The scoring system for the National Board for Professional Teaching Standards (NBPTS) assessments was a groundbreaking undertaking that brought with it a host of unanticipated challenges. These, in turn, generated a complete revision of the approach to scoring, and the design underwent a number of changes during the first decade. Beginning with an analytical model so ambitious that it was entirely too cumbersome and complex to be undertaken within a reasonable timeframe, assessment developers had to systematically redesign a scoring system that would be at once reliable, valid, and operationally feasible.
Roger A. Kerin and Michael G. Harvey
Abstract
The term "strategic thinking" is a relatively recent addition to the lexicon of marketing concepts. Its popularity arises from increasing discontent with highly formalized marketing planning approaches that replace creativity with paperwork and lock executives into a dangerously predictable repertoire of strategic options. Despite the frequent call for strategic thinking to augment the marketing planning process, there is woefully little written on the subject. It would seem that the admonition to THINK emphasized by the late Thomas Watson at IBM is not enough. Rather, strategic thinking requires a perspective on what to think about. The properties of games, which we will describe, provide a valuable insight into what an executive should consider when asked to think strategically regarding a marketing problem or opportunity. These properties form the basis for the game theory approaches in decision analysis where mathematics is the dominant feature. Unfortunately, the impenetrable language of mathematics has obscured the fundamental properties of games so that marketing executives cannot readily use them in a corporate setting. We will look here at these fundamental game properties and see what insights they offer for strategic marketing thinking and formulating competitive strategy.
Shi‐Woei Lin and Ssu‐Wei Huang
Abstract
Purpose
The purpose of this paper is to investigate how expert overconfidence and dependence affect the calibration of aggregated probability judgments obtained by various linear opinion‐pooling models.
Design/methodology/approach
The authors used a large database containing real‐world expert judgments, and adopted the leave‐one‐out cross‐validation technique to test the calibration of aggregated judgments obtained by Cooke's classical model, the equal‐weight linear pooling method, and the best‐expert approach. Additionally, the significance of the effects using linear models was rigorously tested.
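Linear opinion pooling, common to both the equal-weight method and Cooke's classical model, combines expert probability distributions as a weighted average. A minimal sketch with hypothetical expert judgments; Cooke's model additionally derives the weights from each expert's calibration on seed questions, which is not shown here.

```python
def linear_pool(expert_probs, weights=None):
    """Combine experts' probability vectors as a weighted average.
    With weights=None, use equal weights (the equal-weight pooling method);
    otherwise weights could come from e.g. performance-based scoring."""
    n = len(expert_probs)
    if weights is None:
        weights = [1.0 / n] * n
    assert abs(sum(weights) - 1.0) < 1e-9
    k = len(expert_probs[0])
    return [sum(w * p[i] for w, p in zip(weights, expert_probs))
            for i in range(k)]

# Three hypothetical experts judging a three-outcome event.
experts = [
    [0.7, 0.2, 0.1],   # a sharply peaked (possibly overconfident) expert
    [0.4, 0.4, 0.2],
    [0.3, 0.4, 0.3],
]
print(linear_pool(experts))                    # equal weights
print(linear_pool(experts, [0.6, 0.3, 0.1]))  # hypothetical unequal weights
```

Because the pool is a convex combination, a single overconfident expert still pulls the aggregate toward their sharp distribution, which is consistent with the paper's finding that linear pooling does not by itself counteract overconfidence.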
Findings
Significant differences were found between methods. Both linear‐pooling aggregation approaches significantly outperformed the best‐expert technique, indicating the need for inputs from multiple experts. The significant overconfidence effect suggests that linear pooling approaches do not effectively counteract the effect of expert overconfidence. Furthermore, the second‐order interaction between aggregation method and expert dependence shows that Cooke's classical model is more sensitive to expert dependence than equal weights, with high dependence generally leading to much poorer aggregated results; by contrast, the equal‐weight approach is more robust under different dependence levels.
Research limitations/implications
The results suggest that methods involving broadening of subjective confidence intervals or distributions may occasionally be useful for mitigating the overconfidence problem. An equal‐weight approach might be more favorable when the level of dependence between experts is high. Although it was found that the number of experts and the number of seed questions also significantly affect the calibration of the aggregated distribution, further research to find the minimum number of questions or experts required to ensure satisfactory aggregated performance would be desirable. Furthermore, other metrics or probability scoring rules should be used to check the robustness and generalizability of the authors' conclusion.
Originality/value
The paper provides empirical evidence of critical factors affecting the calibration of the aggregated intervals or distribution judgments obtained by linear opinion‐pooling methods.
Gian Luigi Albano and Maria Grazia Santocchia
Abstract
Purpose
The aim of this case study is to review the in-depth (and successful) investigation carried out in 2016 by the Italian Competition Authority [Autorità Garante della Concorrenza e del Mercato (AGCM)] on a nation-wide (multi-lot) framework agreement for consulting services. We also critically assess the tender design and emphasize which dimensions may have facilitated the uncovered anticompetitive agreement.
Design/methodology/approach
The case study borrows from the official Antitrust Authorities’ findings and from the tender documents to paint a comprehensive picture of the cartel’s strategy.
Findings
The case study emphasizes that the AGCM's "conjectured logic" of the cartel's behaviour (endogenous evidence) coincided with the pieces of evidence seized by police forces in criminal proceedings at the cartel members' premises (exogenous evidence). This infrequent feature of bidding-ring investigations underlines the importance of theoretical as well as practical analyses of cartels' behaviour in public procurement markets.
Social implications
As the antitrust investigation was triggered by a confidential report sent by the awarding authority (Consip, the Italian national central purchasing body), the case study also emphasizes the importance of informal as well as formal co-operation between awarding authorities, especially central purchasing bodies, and competition authorities.
Originality/value
The case study belongs to a small set of applied research papers attempting to build a bridge between public procurement design, particularly of sizeable framework agreements, and the mechanisms devised by cartels to "game" procurement procedures. All this is accomplished by looking at all the design dimensions that were exploited by the cartel's members.
Suparerk Lekwijit and Daricha Sutivong
Abstract
Purpose
Prediction markets are techniques to aggregate dispersed public opinions via market mechanisms in order to predict the outcomes of uncertain future events. Many experiments have shown that prediction markets outperform other traditional forecasting methods in terms of accuracy. The logarithmic market scoring rule (LMSR) is one of the simplest and most widely used market mechanisms; however, market makers have to confront crucial design decisions, including the setting of the parameter "b", or the "liquidity parameter", in the price functions. As the liquidity parameter has significant effects on market performance, this paper aims to provide a comprehensive basis for setting the parameter.
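The LMSR cost and price functions are standard (Hanson's rule), with the liquidity parameter b scaling how far a given trade moves the price. A minimal two-outcome sketch of those standard formulas; the paper's artificial-market simulations are considerably richer than this.

```python
import math

def lmsr_cost(q, b):
    """LMSR cost function C(q) = b * ln(sum_i exp(q_i / b))."""
    return b * math.log(sum(math.exp(qi / b) for qi in q))

def lmsr_price(q, b, i):
    """Instantaneous price of outcome i: exp(q_i/b) / sum_j exp(q_j/b)."""
    z = sum(math.exp(qj / b) for qj in q)
    return math.exp(q[i] / b) / z

def trade_cost(q, b, i, shares):
    """Amount a trader pays to buy `shares` of outcome i: C(q') - C(q)."""
    q_new = list(q)
    q_new[i] += shares
    return lmsr_cost(q_new, b) - lmsr_cost(q, b)

# A larger b makes prices less sensitive to the same trade (more liquidity).
for b in (10.0, 100.0):
    q = [0.0, 0.0]               # outstanding shares of the two outcomes
    p_before = lmsr_price(q, b, 0)
    cost = trade_cost(q, b, 0, 10.0)
    p_after = lmsr_price([10.0, 0.0], b, 0)
    print(f"b={b}: price {p_before:.3f} -> {p_after:.3f}, cost {cost:.2f}")
```

This is the tradeoff the paper quantifies: a small b lets prices adapt quickly (fast convergence) but makes them jumpy, while a large b damps each trade's impact, reducing forecast standard error at the cost of slower convergence and higher market-maker subsidy.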
Design/methodology/approach
The analyses include the effects of the liquidity parameter on the forecast standard error and the amount of time for the market price to converge to the true value. These experiments use artificial prediction markets, the proposed simulation models that mimic real prediction markets.
Findings
The simulation results indicate that prediction market’s forecast standard error decreases as the value of the liquidity parameter increases. Moreover, for any given number of traders in the market, there exists an optimal liquidity parameter value that yields appropriate price adaptability and leads to the fastest price convergence.
Originality/value
Understanding these tradeoffs, the market makers can effectively determine the liquidity parameter value under various objectives on the standard error, the time to convergence and cost.
Abstract
Purpose
The purpose of this paper is to clarify critical issues underlying the national culture dimensions of Hofstede and GLOBE, demonstrating their irrelevance to international marketing decision‐making.
Design/methodology/approach
In‐depth discussion of the theoretical and empirical logic underlying the national culture dimension scales and scores.
Findings
Hofstede and GLOBE national culture scores are averages of items that are unrelated and which do not form a valid and reliable scale for the culture dimensions at the level of individuals or organizations. Hence these scores cannot be used to characterize individuals or sub‐groups within countries. The national culture dimension scores are therefore of doubtful use for marketing management that is concerned with individual‐ and segment‐level consumer behavior.
Research limitations/implications
Researchers should be cautious in using the Hofstede and GLOBE national culture dimension scores for analysis at the level of individuals and organizations.
Practical implications
Hofstede and GLOBE dimension scores should not be used to infer individual/managerial and group/organizational level behavior and preferences.
Originality/value
The paper follows a recent paper in IMR which was the first to discuss the common misunderstanding of the Hofstede and GLOBE national culture scales and scores, and their misapplication at the level of individuals and organizations by scholars and practitioners. Here we further expand and clarify the issues.
Abstract
Purpose
This study aims to evaluate whether the Big-4’s commenting efforts influence the characteristics of Financial Accounting Standards Board’s (FASB’s) Final_Standards using the content of their comment letters. Whether auditors lobby standard-setters to help their clients or to help themselves and whether they are successful are questions highly relevant to issues of auditor independence and audit effectiveness.
Design/methodology/approach
Based on components of Mergenthaler (2009), this study develops a rules-based continuum change score to measure how much more (less) rules-based a Final_Standard is compared to its exposure draft to evaluate the influence of the Big-4 on the FASB’s standard-setting for 63 accounting standards.
Findings
The findings show that extensive comment letters and increased uncertainty language are associated with increases in the rules-based attributes included in Final_Standards. These results suggest the Big-4 prioritize a reduction in their own litigation risk over the possible preferences of their clients for less rigid standards. Moreover, the results are consistent with their comment letters influencing the FASB’s decision to include more rules-based attributes in Final_Standards.
Originality/value
This study develops a potential proxy for audit risk by assessing the changes in the rules-based characteristics of proposed accounting standards and using the content of the comment letters to evaluate whether the Big-4 accounting firms may influence the FASB’s Final_Standards. Overall, this study provides a unique perspective on the influence of constituents on the FASB’s standard-setting.