Search results

1 – 10 of over 84,000
Book part
Publication date: 19 June 2012

Daniela Ruggeri


Abstract

Purpose – Accounting research has long shown the effect of subjectivity in performance evaluation. This study investigates one form of subjectivity in performance evaluation: flexibility in weighting performance measures, examining the decisions supervisors make about weighting. Empirical studies show that performance-measure weights are only partially consistent with the predictions of agency theory and remain an open issue.

Methodology/approach – We develop an experiment to analyse supervisor decision-making, manipulating two factors: internal organisational interdependence and the level of managerial performance. We derive hypotheses from both economic and behavioural approaches. The economic approach is based on agency theory predictions and the controllability principle, while the behavioural approach draws on organisational justice theory. We argue that in low interdependence contexts the supervisor's decision confirms the agency theory predictions, while in high interdependence conditions weighting decisions could be driven by behavioural considerations, namely fairness perceptions of the evaluation process and the level of managerial performance.

Findings – We find that in low interdependence contexts the supervisor's decision confirms the agency theory predictions, while in high interdependence contexts it does not. The results indicate that the supervisor's decision stems from the integration of economic and behavioural perspectives.

Research and social implications – The theoretical framework can be useful for interpreting supervisor decision-making and the weighting process.

Originality – The economic and behavioural approaches allow us to understand flexibility in weighting performance measures, suggesting that, in addition to economic considerations, a behavioural perspective may also be relevant in explaining subjective weighting.

Details

Performance Measurement and Management Control: Global Issues
Type: Book
ISBN: 978-1-78052-910-3

Article
Publication date: 19 April 2011

Shihchieh Chou, Chinyi Cheng and Szujui Huang


Abstract

Purpose

The purpose of this paper is to establish a new approach for solving the expansion term problem.

Design/methodology/approach

This study develops an expansion term weighting function derived from the valuable concepts used by previous approaches. These concepts include probability measurement, adjustment according to situations, and summation of weights. Formal tests have been conducted to compare the proposed weighting function with the baseline ranking model and other weighting functions.
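
The abstract names the ingredients of the function (probability measurement, adjustment according to situations, summation of weights) but not the function itself. The sketch below is a hypothetical illustration, under assumed names and inputs, of how such ingredients can be combined when scoring candidate expansion terms from pseudo-relevant documents; it is not the authors' formula.

```python
import math
from collections import Counter

def expansion_term_weights(feedback_docs, collection_size, doc_freq):
    """Hypothetical expansion-term weighting combining the three ideas named in
    the abstract: a probability estimate (term frequency within each feedback
    document), a situational adjustment (inverse document frequency over the
    collection), and a summation of the per-document weights."""
    weights = Counter()
    for doc_terms in feedback_docs:                 # summation across feedback docs
        total = len(doc_terms)
        for term, count in Counter(doc_terms).items():
            p_term = count / total                  # probability measurement
            idf = math.log(collection_size / (1 + doc_freq.get(term, 0)))  # adjustment
            weights[term] += p_term * idf
    return weights.most_common()

# Toy usage with two pseudo-relevant documents and made-up document frequencies.
docs = [["query", "expansion", "term", "term"], ["expansion", "retrieval"]]
print(expansion_term_weights(docs, collection_size=1000,
                             doc_freq={"term": 30, "expansion": 5, "query": 50, "retrieval": 20}))
```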

Findings

The results reveal stable performance by the proposed expansion term weighting function. It proves more effective than the baseline ranking model and outperforms other weighting functions.

Research limitations/implications

Testing on additional data sets and in real working situations is required before the generalisability and superiority of the proposed expansion term weighting function can be asserted.

Originality/value

Stable performance and an acceptable level of effectiveness for the proposed expansion term weighting function indicate the potential for further study and development of this approach. This would add to the current methods studied by the information retrieval community for culling information from documents.

Details

Online Information Review, vol. 35 no. 2
Type: Research Article
ISSN: 1468-4527


Article
Publication date: 3 December 2018

Selcuk Cebi and Cengiz Kahraman


Abstract

Purpose

The purpose of this paper is to propose a novel weighting algorithm for fuzzy information axiom (IA) and to apply it to the evaluation process of 3D printers.

Design/methodology/approach

The IA method is presented as a decision-making tool to evaluate the performance of any design. Weighted IA methods are then investigated and a new weighting procedure is introduced to the literature. The existing axiomatic design methods and the proposed new method are classified into two groups: weighting based on information content and weighting based on design ranges. The information-content-based approach consists of four methods, including pessimistic and optimistic approaches. The philosophy of weighting based on design ranges is to narrow the design ranges in order to decrease fuzziness in the model. To prove the robustness and performance of the proposed weighting method, its results are compared with existing methods in the literature. The new approach is then applied to evaluate 3D printers.
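
For readers unfamiliar with the information axiom, the core quantity is the information content I = log2(system range / common range): the smaller the overlap between what the system delivers and what the design requires, the larger I and the worse the design. The sketch below shows the crisp (non-fuzzy) version together with one simple, assumed way of folding in criterion weights; it is illustrative only and is not the chapter's weighting procedure.

```python
import math

def information_content(system_range, common_range):
    """Axiomatic-design information content I = log2(system range / common range).
    A smaller common range (the design meets the functional requirement less
    often) gives a larger I, i.e. a worse design."""
    if common_range <= 0:
        return float("inf")          # the design never satisfies the requirement
    return math.log2(system_range / common_range)

def weighted_information(criteria, weights):
    """One simple, assumed way to fold criterion weights into IA: scale each
    criterion's information content by its weight and sum (illustrative only)."""
    return sum(w * information_content(s, c)
               for (s, c), w in zip(criteria, weights))

# Toy usage: two criteria for a hypothetical 3D printer, given as
# (system range, common range) pairs, with assumed criterion weights.
printer_a = [(10.0, 8.0), (5.0, 2.0)]
print(weighted_information(printer_a, weights=[0.6, 0.4]))
```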

Findings

The results show that the proposed weighting algorithm performs better than the existing algorithms for IA. It should therefore be used as the weighting tool for IA.

Originality/value

An effective weighting method compatible with the philosophy of the IA method has been proposed. Furthermore, the performances of 3D printers are compared using the proposed method.

Details

Journal of Enterprise Information Management, vol. 32 no. 1
Type: Research Article
ISSN: 1741-0398


Article
Publication date: 14 September 2010

Stelios Grafakos, Alexandros Flamos, Vlasis Oikonomou and Dimitrios Zevgolis


Abstract

Purpose

Evaluation of energy and climate policy interactions is a complex issue, and the incorporation of stakeholders' preferences has not been addressed systematically. The purpose of this paper is to present an integrated weighting methodology developed to incorporate weighting preferences into an ex ante evaluation of climate and energy policy interactions.

Design/methodology/approach

A multi-criteria analysis (MCA) weighting methodology combining pair-wise comparisons and ratio importance weighting methods has been elaborated. It first introduces users to the evaluation process through a warm-up holistic approach that produces an initial ranking of the criteria, and then lets them express relative importance ratios in pair-wise comparisons of criteria through an interactive interface with verbal, numerical and visual representations of their preferences. Moreover, it provides a ranking consistency test in which users can see the degree of (in)consistency of their preferences.
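
The abstract does not reproduce the weight-derivation step, so the sketch below shows a generic version of the two ingredients it mentions: deriving criterion weights from a ratio pairwise-comparison matrix (here via the geometric-mean row method) and an AHP-style consistency check. The function names, the random-index table and the toy matrix are assumptions, not the paper's implementation.

```python
import numpy as np

# Saaty-style Random Index values used in the consistency ratio, for n >= 3 criteria.
RI = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}

def weights_from_pairwise(A):
    """Derive criterion weights from a ratio pairwise-comparison matrix A
    using the geometric-mean (row) method, a common MCA choice."""
    g = np.prod(A, axis=1) ** (1.0 / A.shape[0])
    return g / g.sum()

def consistency_ratio(A):
    """AHP-style (in)consistency check: CR = CI / RI with CI = (lambda_max - n) / (n - 1).
    Values above roughly 0.1 usually signal inconsistent judgements."""
    n = A.shape[0]
    lam = np.max(np.real(np.linalg.eigvals(A)))
    ci = (lam - n) / (n - 1)
    return ci / RI[n]

# Toy usage: three criteria compared on a ratio scale.
A = np.array([[1, 3, 5],
              [1 / 3, 1, 2],
              [1 / 5, 1 / 2, 1]], dtype=float)
print(weights_from_pairwise(A), consistency_ratio(A))
```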

Findings

Stakeholders and experts in the energy policy field who tested the methodology expressed approval of and satisfaction with the combination of ranking and pair-wise comparison techniques, since it allows a gradual approach to the evaluation problem. In addition, the main difficulties in MCA weight elicitation processes were overcome.

Research limitations/implications

The methodology was tested on a small sample of stakeholders; a larger sample, a broader range of stakeholders and applications to different climate policy evaluation cases merit further research.

Originality/value

The novel aspect of the developed methodology consists of the combination of ranking and pair‐wise comparison techniques for the elicitation of stakeholders' preferences.

Details

International Journal of Energy Sector Management, vol. 4 no. 3
Type: Research Article
ISSN: 1750-6220


Article
Publication date: 28 August 2008

May H. Lo and Le (Emily) Xu


Abstract

Purpose

The purpose of this study is to examine whether financial analysts mislead investors in recognizing the differential persistence of the three cash flow components of earnings, as defined by Dechow et al., when forecasting annual earnings.

Design/methodology/approach

The paper uses Mishkin's econometric approach to compare the persistence of the cash flow components within and across the historical, analysts' and investors' weightings.
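
As a reference for how the Mishkin framework is usually set up (the notation below is conventional, not quoted from the paper): a forecasting equation estimates the actual persistence of each cash flow component, a companion pricing or forecast equation recovers the weights implicit in returns or in analyst forecasts, and a test of the cross-equation restriction measures the degree of mis-weighting.

```latex
% Forecasting equation: actual dependence of next-period earnings on the three
% cash flow components (CF^{(1)}, CF^{(2)}, CF^{(3)} in illustrative notation).
\begin{align}
  X_{t+1} &= \alpha_0 + \alpha_1\,CF^{(1)}_t + \alpha_2\,CF^{(2)}_t + \alpha_3\,CF^{(3)}_t + \varepsilon_{t+1}\\
% Pricing/forecast equation: the weights that the market (or the analyst) appears to use.
  r_{t+1} &= \beta\left(X_{t+1} - \alpha_0^{*} - \alpha_1^{*}\,CF^{(1)}_t - \alpha_2^{*}\,CF^{(2)}_t - \alpha_3^{*}\,CF^{(3)}_t\right) + u_{t+1}
\end{align}
% Rational weighting requires \alpha_i = \alpha_i^{*} for each component; testing this
% cross-equation restriction quantifies the degree of mis-weighting.
```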

Findings

It is found that financial analysts' weightings of the cash flow components are more closely aligned with the historical relations than are investors' weightings, both in direction and in magnitude. The degree of analysts' mis‐weighting is economically small and much lower than the degree of investors' mis‐weighting. Moreover, the extent of both investors' and analysts' mis‐weightings of the cash components is generally smaller for firms with greater levels of analyst following, a proxy for the quality of the information environment.

Research limitations/implications

The findings suggest that financial analysts' bias in weighting the cash components of earnings is at best a partial explanation for investors' bias.

Practical implications

This study is important to academics and the investment community that relies upon financial analysts as information intermediaries, because the ability of analysts to incorporate value‐relevant information in their published expectations may impact securities prices.

Originality/value

The study is the first to document the weightings of the cash components of earnings by financial analysts. In addition, this paper provides evidence that financial analysts, as information intermediaries, are less biased than investors in processing not only the accrual but also the cash components of earnings.

Details

Accounting Research Journal, vol. 21 no. 1
Type: Research Article
ISSN: 1030-9616


Article
Publication date: 30 October 2020

Adrija Majumdar and Arnab Adhikari


Abstract

Purpose

In the context of the sharing economy, Airbnb's superhost program has emerged as a phenomenal success story that has transformed the tourism industry and garnered enormous popularity. Proper performance evaluation and classification of superhosts are crucial to incentivize them to maintain higher service quality. The main objective of this paper is to design an integrated multicriteria decision-making (MCDM) method-based performance evaluation and classification framework for Airbnb superhosts and to study how contextual factors such as price, number of listings and cancellation policy vary across superhosts.

Design/methodology/approach

This work considers three weighting techniques, mean-, entropy- and CRITIC-based methods, to determine the factor weights. For each weighting technique, an integrated TOPSIS-MOORA-based performance evaluation method and classification framework have been developed. The proposed methodology has been applied to the performance evaluation of 7,308 New York City superhosts using real data from Airbnb.
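
As a rough illustration of two building blocks named above, the sketch below computes entropy-based criterion weights and a TOPSIS closeness coefficient for an invented decision matrix; the CRITIC weighting, the MOORA part and the classification step of the paper's integrated framework are omitted.

```python
import numpy as np

def entropy_weights(X):
    """Entropy weighting: criteria whose values vary more across alternatives
    (lower entropy) receive higher weights. X is an (alternatives x criteria)
    matrix of benefit-type scores."""
    P = X / X.sum(axis=0)
    E = -(P * np.log(P + 1e-12)).sum(axis=0) / np.log(X.shape[0])
    d = 1 - E
    return d / d.sum()

def topsis_closeness(X, w):
    """TOPSIS closeness coefficient: distance to the anti-ideal solution relative
    to the total distance to the ideal and anti-ideal (benefit criteria assumed)."""
    V = w * X / np.linalg.norm(X, axis=0)
    ideal, anti = V.max(axis=0), V.min(axis=0)
    d_plus = np.linalg.norm(V - ideal, axis=1)
    d_minus = np.linalg.norm(V - anti, axis=1)
    return d_minus / (d_plus + d_minus)

# Toy usage: four hypothetical hosts scored on three benefit criteria.
X = np.array([[4.8, 120, 0.95], [4.2, 80, 0.90], [4.9, 60, 0.99], [3.9, 150, 0.85]])
w = entropy_weights(X)
print(w, topsis_closeness(X, w))
```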

Findings

From the performance evaluation perspective, the importance of devising an integrated methodology instead of adopting a single approach is highlighted using a nonparametric Wilcoxon signed-rank test. The context-specific findings show that price and number of listings are highest for superhosts in the topmost category.
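
For context, a paired nonparametric comparison of the scores produced by two schemes for the same alternatives can be run as below; the numbers are hypothetical and are not the paper's data.

```python
from scipy.stats import wilcoxon

# Scores of the same alternatives under two ranking schemes (invented values).
scores_integrated = [0.81, 0.64, 0.92, 0.55, 0.73, 0.68, 0.88]
scores_single     = [0.78, 0.66, 0.85, 0.50, 0.70, 0.69, 0.83]

# Paired, nonparametric test of whether the two score distributions differ.
stat, p_value = wilcoxon(scores_integrated, scores_single)
print(stat, p_value)
```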

Practical implications

The proposed methodology facilitates the design of a leaderboard to motivate service providers to perform better. It can also be applied to other accommodation-sharing platforms and to ride-sharing platforms.

Originality/value

This is the first work to propose a performance evaluation and classification framework for service providers in the sharing economy in the context of the tourism industry.

Details

Benchmarking: An International Journal, vol. 28 no. 2
Type: Research Article
ISSN: 1463-5771


Article
Publication date: 1 January 1979

Karen Sparck Jones


Abstract

Previous experiments demonstrated the value of relevance weighting for search terms, but relied on substantial relevance information for the terms. The present experiments were designed to study the effects of weights based on very limited relevance information, for example supplied by one or two relevant documents. The tests simulated iterative searching, as in an on‐line system, and show that even very little relevance information can be of considerable value.
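
The relevance weight in question is commonly written in the following form; the 0.5 corrections are what make it computable from only one or two known relevant documents. The notation is the conventional one rather than a quotation from the paper.

```latex
% r = known relevant documents containing the term, R = known relevant documents,
% n = documents containing the term, N = collection size.
\[
  w = \log \frac{(r + 0.5)\,/\,(R - r + 0.5)}{(n - r + 0.5)\,/\,(N - n - R + r + 0.5)}
\]
% With R as small as 1 or 2, the 0.5 offsets keep the estimate finite and usable,
% which is the setting the iterative-search simulations explore.
```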

Details

Journal of Documentation, vol. 35 no. 1
Type: Research Article
ISSN: 0022-0418


Details

Travel Survey Methods
Type: Book
ISBN: 978-0-08-044662-2

Article
Publication date: 8 April 2021

Mariem Bounabi, Karim Elmoutaouakil and Khalid Satori


Abstract

Purpose

This paper aims to present a new term weighting approach for text classification as a text mining task. The original method, neutrosophic term frequency – inverse term frequency (NTF-IDF), is an extended version of the popular fuzzy TF-IDF (FTF-IDF) and uses neutrosophic reasoning to analyze and generate weights for terms in natural language. The paper also proposes a comparative study of FTF-IDF and NTF-IDF and of their impact on different machine learning (ML) classifiers for document categorization.

Design/methodology/approach

After preprocessing the textual data, the original neutrosophic TF-IDF applies a neutrosophic inference system (NIS) to produce weights for the terms representing a document. Using the local frequency (TF), the global frequency (IDF) and the text length as NIS inputs, the study generates two neutrosophic weights for a given term. The first measure provides information on the relevance degree of the word, and the second represents its ambiguity degree. Next, the Zhang combination function is applied to combine the two neutrosophic weights into the final term weight, which is inserted into the document's representative vector. To analyze the impact of NTF-IDF on the classification phase, the study uses a set of ML algorithms.
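
Since the abstract only outlines the data flow, the sketch below mimics it with a toy stand-in: classical TF and IDF are computed as inputs, a "relevance" and an "ambiguity" score are derived from them, and the two are combined into a single term weight. The heuristics, function names and corpus are assumptions; the actual NTF-IDF uses a neutrosophic inference system and the Zhang combination function.

```python
import math
from collections import Counter

def tf_idf(term, doc, corpus):
    """Classical TF and IDF, the quantities that both FTF-IDF and NTF-IDF start from."""
    tf = Counter(doc)[term] / len(doc)
    df = sum(term in d for d in corpus)
    idf = math.log(len(corpus) / (1 + df))
    return tf, idf

def toy_term_weight(term, doc, corpus):
    """Hypothetical stand-in for the inference step: derive a 'relevance' and an
    'ambiguity' score from TF, IDF and document length, then combine them into
    one weight. This only mimics the data flow described in the abstract."""
    tf, idf = tf_idf(term, doc, corpus)
    relevance = tf * max(idf, 0.0)             # frequent-in-doc, rare-in-corpus terms matter more
    ambiguity = 1.0 / (1.0 + len(doc) * tf)    # toy heuristic: repeated terms treated as less ambiguous
    return relevance * (1.0 - ambiguity)

# Toy usage on an invented three-document corpus.
corpus = [["text", "mining", "weights"],
          ["neutrosophic", "weights", "weights"],
          ["fuzzy", "logic"]]
print(toy_term_weight("neutrosophic", corpus[1], corpus))
```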

Findings

Exploiting the characteristics of neutrosophic logic (NL), the authors were able to study the ambiguity of terms and their degree of relevance in representing a document. The choice of NL has proven effective in defining significant text vectorization weights, especially for text classification tasks. The experiments demonstrate that the new method positively impacts categorization. Moreover, the adopted system's recognition rate is higher than 91%, an accuracy score not attained using FTF-IDF. Also, on benchmark data sets from different text mining fields and with several ML classifiers, i.e. SVM and feed-forward networks, applying the proposed NTF-IDF term scores improves accuracy by 10%.

Originality/value

The novelty of this paper lies in two aspects: first, a new term weighting method that uses term frequencies to define the relevance and ambiguity of a term; second, the application of NL to infer the weights, which also aims to correct the shortcomings of FTF-IDF and the drawbacks of its underlying fuzzy logic. The introduced technique was combined with different ML models to improve the accuracy and relevance of the feature vectors fed to the classification mechanism.

Details

International Journal of Web Information Systems, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1744-0084


Book part
Publication date: 26 February 2016

Noel Cassar and Simon Grima


Abstract

Introduction

The recent development of the European sovereign debt crisis showed that sovereign debt is not "risk free." The traditional index bond management used during the last two decades, such as the market-capitalization weighting scheme, has been severely called into question. To overcome these drawbacks, alternative weighting schemes have recently attracted attention from both academic researchers and market practitioners. One of the key developments was the introduction of passive funds using economic fundamental indicators.

Purpose

In this chapter, the authors introduce models with economic drivers, with the aim of investigating whether the fundamental approaches outperform the other models on risk-adjusted returns and other measures.

Methodology

The authors constructed five portfolios composed of Eurozone sovereign bonds: the Market-Capitalization RP, GDP RP, Ratings RP, Fundamental-Ranking RP and Fundamental-Weighted RP models, all created exclusively for this chapter. Both fundamental models use a range of 10 country fundamentals. Unlike other studies, this work applies the risk parity concept, an allocation technique that aims to equalize risk across different assets, using the credit default swap as a proxy for sovereign credit risk. The models were run using the Generalized Reduced Gradient (GRG) method as the optimization model, together with Lagrange multipliers and the Karush–Kuhn–Tucker conditions. This allowed all of the above models to be compared in terms of performance, risk-adjusted returns, concentration, and weighted average ratings.
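
For readers unfamiliar with risk parity, the sketch below computes equal-risk-contribution weights for a toy covariance matrix. It uses an off-the-shelf SLSQP solver instead of the chapter's GRG/Lagrange/KKT setup, and the CDS-implied volatilities and correlations are invented.

```python
import numpy as np
from scipy.optimize import minimize

def risk_parity_weights(cov):
    """Equal-risk-contribution (risk parity) weights: choose long-only weights
    summing to 1 so that each asset contributes the same share of portfolio risk."""
    n = cov.shape[0]

    def objective(w):
        port_var = w @ cov @ w
        rc = w * (cov @ w)                 # risk contribution of each asset
        return np.sum((rc - port_var / n) ** 2)

    cons = ({"type": "eq", "fun": lambda w: w.sum() - 1.0},)
    bounds = [(0.0, 1.0)] * n
    res = minimize(objective, np.full(n, 1.0 / n), bounds=bounds, constraints=cons)
    return res.x

# Toy usage: covariance built from invented CDS-implied volatilities and correlations.
vols = np.array([0.02, 0.05, 0.10])
corr = np.array([[1.0, 0.6, 0.4], [0.6, 1.0, 0.5], [0.4, 0.5, 1.0]])
cov = np.outer(vols, vols) * corr
print(risk_parity_weights(cov))
```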

Findings

By analyzing the whole period between 2006 and 2014, it was found that both fundamental models gave very appealing results in terms of risk-adjusted returns. The best results were returned by the Fundamental-Ranking RP model, followed by the Fundamental-Weighted RP model. However, better results for both performance and risk-adjusted returns were achieved on a yearly basis and when sub-dividing the whole period into three equal sub-periods. Moreover, the authors concluded that over the long term, fundamental bond indexing outperformed the other approaches by offering superior return and risk characteristics. Thus, fundamental indexation can be used as an alternative to other traditional models.

Details

Contemporary Issues in Bank Financial Management
Type: Book
ISBN: 978-1-78635-000-8

