Search results

1 – 10 of over 1000
Open Access
Article
Publication date: 2 December 2016

Juan Aparicio

Abstract

Purpose

The purpose of this paper is to provide an outline of the major contributions in the literature on the determination of the least distance in data envelopment analysis (DEA). The focus herein is primarily on methodological developments. Specifically, attention is mainly paid to modeling aspects, computational features, the satisfaction of properties and duality. Finally, some promising avenues of future research on this topic are stated.

Design/methodology/approach

DEA is a methodology based on mathematical programming for assessing the relative efficiency of a set of decision-making units (DMUs) that use several inputs to produce several outputs. DEA is classified in the literature as a non-parametric method because it does not assume a particular functional form for the underlying production function, and it presents, in this sense, some outstanding properties: the efficiency of firms may be evaluated independently of the market prices of the inputs used and outputs produced; it may easily be used with multiple inputs and outputs; a single efficiency score is obtained for each assessed organization; the technique ranks organizations based on relative efficiency; and, finally, it yields benchmarking information. When applied to a dataset of observations and variables (inputs and outputs), DEA models provide both benchmarking information and efficiency scores for each of the evaluated units. Without a doubt, this benchmarking information gives DEA a distinct advantage over other efficiency methodologies, such as stochastic frontier analysis (SFA). Technical inefficiency is typically measured in DEA as the distance between the observed unit and a “benchmarking” target on the estimated piecewise linear efficient frontier. The choice of this target is critical for assessing the potential performance of each DMU in the sample, as well as for providing information on how to increase its performance. However, traditional DEA models yield targets that are determined by the “furthest” efficient projection from the evaluated DMU. The projected point on the efficient frontier obtained in this way may not be a representative projection for the judged unit, and consequently, some authors in the literature have suggested determining the closest targets instead. The general argument behind this idea is that closer targets suggest directions of improvement for the inputs and outputs of inefficient units that may lead them to efficiency with less effort. Indeed, authors like Aparicio et al. (2007) have shown, in an application to airlines, that substantial differences can be found between the targets provided by the criterion used in traditional DEA models and those obtained when the criterion of closeness is used to determine projection points on the efficient frontier. The determination of closest targets is connected to the calculation of the least distance from the evaluated unit to the efficient frontier of the reference technology. In fact, the former is usually computed by solving mathematical programming models that minimize some type of distance (e.g. Euclidean). In this respect, the main contribution in the literature is the paper by Briec (1998) on Hölder distance functions, where technical inefficiency with respect to the “weakly” efficient frontier is formally defined through mathematical distances.
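
To make these modeling aspects concrete, here is a minimal sketch of the classic radial (input-oriented, constant-returns-to-scale) envelopment model, solved as one linear program per DMU with SciPy. The three-DMU dataset is hypothetical, and the model shown is the traditional “furthest-projection” one that the least-distance literature seeks to replace.

```python
# Radial input-oriented CCR DEA model: min theta subject to the evaluated
# DMU being dominated by a convex-cone combination of observed DMUs.
import numpy as np
from scipy.optimize import linprog

X = np.array([[2.0, 4.0], [3.0, 2.0], [4.0, 5.0]])   # inputs, rows = DMUs (hypothetical)
Y = np.array([[1.0], [1.0], [1.0]])                  # outputs, rows = DMUs

def ccr_efficiency(o, X, Y):
    n, m = X.shape          # n DMUs, m inputs
    s = Y.shape[1]          # s outputs
    c = np.zeros(1 + n); c[0] = 1.0                  # minimise theta
    # inputs:  sum_j lam_j x_ij - theta * x_io <= 0
    A_in = np.hstack([-X[o].reshape(m, 1), X.T])
    b_in = np.zeros(m)
    # outputs: -sum_j lam_j y_rj <= -y_ro
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])
    b_out = -Y[o]
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([b_in, b_out]),
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun  # theta* = radial efficiency score

for o in range(len(X)):
    print(f"DMU {o}: efficiency = {ccr_efficiency(o, X, Y):.3f}")
```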

Findings

The attractive features of closest targets from a benchmarking point of view have, in recent times, generated increasing interest among researchers in calculating the least distance to evaluate technical inefficiency (Aparicio et al., 2014a). In this paper, we therefore present a general classification of the published contributions, mainly from a methodological perspective, and additionally indicate avenues for further research on this topic. The approaches cited in this paper differ in how the idea of similarity is made operational. Similarity is, in this sense, implemented as closeness between the values of the inputs and/or outputs of the assessed units and those of the obtained projections on the frontier of the reference production possibility set. Similarity may be measured through multiple distances and efficiency measures; in turn, the aim is to globally minimize the DEA model slacks in order to determine the closest efficient targets. However, as we show later in the text, minimizing a mathematical distance in DEA is not an easy task, since it amounts to minimizing the distance to the complement of a polyhedral set, which is not a convex set. This complexity justifies the existence of different alternatives for solving these types of models.
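
In the notation common to this literature, the least distance problem under a Hölder norm $\ell_p$ can be sketched, for an assessed unit $(x_0, y_0)$ and a technology $T$ with weakly efficient frontier $\partial^W(T)$, as

$$ D_p\bigl((x_0,y_0),\,\partial^W(T)\bigr) \;=\; \min\Bigl\{\,\bigl\|(x_0,y_0)-(x,y)\bigr\|_p \;:\; (x,y)\in\partial^W(T)\Bigr\}. $$

This is a sketch of the standard formulation, not a result specific to any one paper. Because $\partial^W(T)$ is the boundary of a polyhedral set, the program effectively minimizes a distance over the complement of a convex set, which is precisely the non-convexity referred to above.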

Originality/value

To the best of our knowledge, this is the first survey on this topic.

Details

Journal of Centrum Cathedra, vol. 9 no. 2
Type: Research Article
ISSN: 1851-6599

Open Access
Article
Publication date: 21 December 2022

GyeHong Kim

Abstract

This paper presents a new methodology for evaluating the value and sensitivity of autocall knock-in type equity-linked securities. Whereas the existing evaluation methods, Monte Carlo simulation and the finite difference method, tend to underestimate the knock-in effect, one of the important characteristics of this product type, this paper derives a precise joint probability formula for multiple autocall chances and knock-in events. Based on this formula, calculation results obtained by numerical and Monte Carlo integration are presented and compared with those of existing models. The proposed model shows notable improvements in accuracy and calculation time.
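
For context, here is a hedged sketch of the baseline the paper improves upon: plain Monte Carlo valuation of a simplified single-asset autocallable note with a knock-in barrier under geometric Brownian motion. All contract terms and market parameters are hypothetical, and the paper’s own contribution, the exact joint probability formula, is not reproduced here.

```python
# Simplified autocallable ELS: semi-annual autocall checks, path-monitored
# knock-in barrier, principal loss at maturity if knocked in and never called.
import numpy as np

rng = np.random.default_rng(0)
S0, r, sigma = 100.0, 0.02, 0.25                 # spot, rate, volatility (assumed)
obs = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0])   # autocall observation dates (years)
call_barrier = 0.90 * S0                         # early-redemption trigger
ki_barrier = 0.55 * S0                           # knock-in barrier
coupon = 0.04                                    # coupon per half-year period

n_paths, steps_per_obs = 20_000, 63
dt = obs[0] / steps_per_obs
n_steps = steps_per_obs * len(obs)

z = rng.standard_normal((n_paths, n_steps))
S = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt
                          + sigma * np.sqrt(dt) * z, axis=1))

knocked_in = S.min(axis=1) <= ki_barrier         # discrete barrier monitoring
payoff = np.zeros(n_paths)
alive = np.ones(n_paths, dtype=bool)
for k, t in enumerate(obs):
    S_t = S[:, (k + 1) * steps_per_obs - 1]
    called = alive & (S_t >= call_barrier)       # early redemption with accrued coupons
    payoff[called] = (1.0 + coupon * (k + 1)) * np.exp(-r * t)
    alive &= ~called

S_T = S[:, -1]                                   # paths surviving to maturity
safe = alive & ~knocked_in
payoff[safe] = (1.0 + coupon * len(obs)) * np.exp(-r * obs[-1])
hit = alive & knocked_in
payoff[hit] = (S_T[hit] / S0) * np.exp(-r * obs[-1])  # principal loss after knock-in

print(f"MC value per unit notional: {payoff.mean():.4f}")
```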

Details

Journal of Derivatives and Quantitative Studies: 선물연구, vol. 31 no. 1
Type: Research Article
ISSN: 1229-988X

Open Access
Article
Publication date: 11 September 2024

Mengxi Yang, Jie Guo, Lei Zhu, Huijie Zhu, Xia Song, Hui Zhang and Tianxiang Xu

Abstract

Purpose

This study aims to objectively evaluate the fairness of algorithms, to explore fairness in specific scenarios in combination with their characteristics and to construct an algorithmic fairness evaluation index system for such scenarios.

Design/methodology/approach

This paper selects marketing scenarios and, following the idea of “theory construction – scenario feature extraction – enterprise practice,” summarizes the definitions and standards of fairness, maps the process through which marketing algorithms are applied and establishes a fairness evaluation index system for marketing equity allocation algorithms. Taking simulated marketing data as an example, the fairness performance of marketing algorithms in selected feature fields is measured, and the effectiveness of the evaluation system proposed in this paper is verified.

Findings

The study reached the following conclusions: (1) Different fairness evaluation criteria have different emphases and may produce different results; different fairness definitions and standards should therefore be selected in different fields according to the characteristics of the scenario. (2) The fairness of the marketing equity distribution algorithm can be measured from three aspects: marketing coverage, marketing intensity and marketing frequency. Specifically, for the fairness of coverage, the two standards of equal opportunity and misjudgment-rate difference are selected, while the standard of group fairness is selected for intensity and frequency, as illustrated in the sketch below. (3) Different degrees of fairness restrictions should be imposed on different feature fields, and the interpretation of the calculation results and the means of subsequent intervention should likewise differ according to the marketing objectives and industry characteristics.
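
As a concrete illustration of the three kinds of criteria just listed, the sketch below computes an equal-opportunity gap, a misjudgment-rate (false-positive) gap and a group-fairness (selection-rate) gap on simulated targeting data. The metric definitions are the standard ones from the fairness literature; the data and targeting rule are hypothetical, not the paper’s index system.

```python
# Group-wise fairness gaps on simulated marketing-targeting data.
import numpy as np

rng = np.random.default_rng(1)
n = 1000
group = rng.integers(0, 2, n)                 # sensitive attribute (0/1)
y_true = rng.integers(0, 2, n)                # would the customer respond?
# hypothetical targeting rule that deliberately favours group 1
y_pred = (rng.random(n) < 0.30 + 0.20 * group).astype(int)

def rates(g):
    m = group == g
    tpr = y_pred[m & (y_true == 1)].mean()    # true-positive rate
    fpr = y_pred[m & (y_true == 0)].mean()    # false-positive ("misjudgment") rate
    sel = y_pred[m].mean()                    # selection rate (marketing coverage)
    return tpr, fpr, sel

(tpr0, fpr0, sel0), (tpr1, fpr1, sel1) = rates(0), rates(1)
print(f"equal-opportunity gap : {abs(tpr0 - tpr1):.3f}")
print(f"misjudgment-rate gap  : {abs(fpr0 - fpr1):.3f}")
print(f"group-fairness gap    : {abs(sel0 - sel1):.3f}")
```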

Research limitations/implications

First, the fairness sensitivity of different feature fields differs, but this paper does not rank the importance of feature fields. In the future, a classification table of sensitive attributes could be built according to their importance, assigning different evaluation and protection priorities. Second, only one set of simulated marketing data is used to measure overall algorithmic fairness; multiple marketing campaigns could subsequently be measured and compared to reflect the long-term fairness performance of marketing algorithms. Third, this paper does not explore interventions and measures to improve algorithmic fairness. Different feature fields should be subject to different degrees of fairness constraints, and their subsequent interventions should therefore differ, which needs to be explored in future research.

Practical implications

This paper combines the specific features of marketing scenarios and selects appropriate fairness evaluation criteria to build an index system for fairness evaluation of marketing algorithms, which provides a reference for assessing and managing the fairness of marketing algorithms.

Social implications

Algorithm governance and algorithmic fairness are important issues in the era of artificial intelligence. The algorithmic fairness evaluation index system for marketing scenarios constructed in this paper lays a foundation for the safe application of AI algorithms and technologies in marketing, provides tools and means for algorithm governance and supports the safe, efficient and orderly development of algorithms.

Originality/value

This paper first comprehensively sorts out the standards of fairness and clarifies the differences between standards and their evaluation focuses. Second, focusing on the marketing scenario and its characteristics, it identifies the key links for fairness evaluation, innovatively selects different standards to evaluate fairness in the application of marketing algorithms and builds the corresponding index system, forming a systematic fairness evaluation tool for marketing algorithms.

Details

Journal of Electronic Business & Digital Economics, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2754-4214

Open Access
Article
Publication date: 6 September 2022

Paul Roelofsen and Kaspar Jansen

Abstract

Purpose

The purpose of this study is to analyze the question “In what order of magnitude does the comfort and performance improvement lie with the use of a cooling vest for construction workers?”.

Design/methodology/approach

The use of personal cooling systems, in the form of cooling vests, is intended not only to reduce the heat load, in order to prevent disruption of the body’s thermoregulation system, but also to improve work performance. A calculation study was carried out on the basis of four validated mathematical models, namely a cooling vest model, a thermophysiological human model, a dynamic thermal sensation model and a performance loss model for construction workers.

Findings

The use of a cooling vest has a significant beneficial effect on the thermal sensation and the loss of performance, depending on the thermal load on the body.

Research limitations/implications

Each cooling vest can be characterized on the basis of the maximum cooling power (Pmax; in W/m²), the cooling capacity (Auc; in Wh/m²) and the time (tc; in minutes) after which the cooling power is negligible. In order to objectively compare cooling vests, a (preferably International and/or European) standard/guideline must be compiled to determine the cooling power and the cooling capacity of cooling vests.
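
As an illustration of this characterization, the sketch below derives Pmax, Auc and tc from a cooling-power curve. The exponential decay curve and the negligibility threshold are assumptions for illustration, not measured vest data.

```python
# Characterising a cooling vest from its power curve P(t):
# Pmax = peak power, Auc = area under the curve, tc = time until negligible.
import numpy as np

t_min = np.linspace(0.0, 180.0, 361)            # time in minutes
P = 120.0 * np.exp(-t_min / 45.0)               # assumed cooling power, W/m²

Pmax = P.max()                                  # W/m²
Auc = np.trapz(P, t_min / 60.0)                 # Wh/m² (minutes -> hours)
threshold = 5.0                                 # assumed "negligible" level, W/m²
tc = t_min[np.argmax(P < threshold)]            # first time below threshold

print(f"Pmax = {Pmax:.0f} W/m², Auc = {Auc:.0f} Wh/m², tc ≈ {tc:.0f} min")
```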

Practical implications

It is recommended to implement the use of cooling vests in the construction process so that employees can use them if necessary or desired.

Social implications

Climate change, with its resulting global warming, is one of the biggest problems of our time. Rising outdoor temperatures will continue in the 21st century, with a greater frequency and duration of heat waves. Some regions of the world are more affected than others. Europe is one of the regions where rising global temperatures will adversely affect public health, especially that of the labor force, resulting in a decline in labor productivity. Clearly, in many situations air conditioning is not an option because it does not provide sufficient cooling or requires a very expensive investment, for example, in construction work. In such situations, personal cooling systems, such as cooling vests, can be an efficient and financially attractive solution to the problem of discomfort and heat stress.

Originality/value

The value of the study lies in the link between four validated mathematical models, namely a cooling vest model, a thermophysiological human model, a dynamic thermal sensation model and a performance loss model for construction workers.

Details

International Journal of Clothing Science and Technology, vol. 35 no. 1
Type: Research Article
ISSN: 0955-6222

Open Access
Article
Publication date: 25 March 2024

Florian Follert and Werner Gleißner

Abstract

Purpose

From the buying club’s perspective, the transfer of a player can be interpreted as an investment from which the club expects uncertain future benefits. This paper aims to develop a decision-oriented approach for the valuation of football players that could theoretically help clubs determine the subjective value of investing in a player to assess its potential economic advantage.

Design/methodology/approach

We build on a semi-investment-theoretical risk-value model and elaborate an approach that can be applied in imperfect markets under uncertainty. Furthermore, we illustrate the valuation process with a numerical example based on fictitious data. Because of this explicitly intended decision support, our approach differs fundamentally from a large part of the literature, which is empirically based and attempts to explain observable figures through various influencing factors.

Findings

We propose a semi-investment-theoretical valuation approach based on a two-step model: a first valuation at the club level and a final calculation to determine the decision value for an individual player. In contrast to the previous literature, we do not rely on an econometric framework that attempts to explain observable past variables; rather, we present a general, forward-looking decision model that can support managers in their investment decisions, as sketched below.
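
To indicate the shape of such a calculus without reproducing the authors’ model, the sketch below computes a risk-adjusted decision value as the expected present value of a player’s simulated incremental net cash flows minus a risk deduction proportional to their dispersion. All figures and the risk-aversion parameter are fictitious.

```python
# Risk-value style ceiling price for a player transfer (illustrative only):
# decision value = E[PV of incremental cash flows] - lambda * sd(PV).
import numpy as np

rng = np.random.default_rng(42)
r, horizon, n_sims = 0.06, 4, 10_000           # discount rate, contract years

# simulated incremental net cash flows (EUR m) attributed to the player:
# sporting success premia, merchandising, expected resale value in year 4
cf = rng.normal(loc=[4.0, 4.5, 4.5, 9.0], scale=[1.5, 2.0, 2.5, 5.0],
                size=(n_sims, horizon))
disc = (1 + r) ** -np.arange(1, horizon + 1)
pv = cf @ disc                                  # present value per scenario

lam = 0.4                                       # risk-aversion weight (assumed)
decision_value = pv.mean() - lam * pv.std()     # risk-adjusted maximum price

print(f"E[PV] = {pv.mean():.1f}m, sd = {pv.std():.1f}m, "
      f"max payable fee ≈ {decision_value:.1f}m")
```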

Originality/value

This approach is the first to show managers how to make an economically rational investment decision by determining the maximum payable transfer price. Nevertheless, there is no normative requirement for the decision-maker; the club will obviously have to supplement the calculus with nonfinancial objectives. Overall, our paper can constitute a first step toward decision-oriented player valuation and toward theoretical comparison with practical investment decisions in football clubs, which obviously also take other sport-specific considerations into account.

Details

Management Decision, vol. 62 no. 13
Type: Research Article
ISSN: 0025-1747

Open Access
Article
Publication date: 7 December 2022

T.O.M. Forslund, I.A.S. Larsson, J.G.I. Hellström and T.S. Lundström

Abstract

Purpose

The purpose of this paper is to present a fast and bare-bones implementation of a numerical method for quickly simulating turbulent thermal flows on GPUs. The work also validates earlier research showing that the lattice Boltzmann method (LBM) is suitable for complex thermal flows.

Design/methodology/approach

A dual-lattice, hydrodynamic (D3Q27) and thermal (D3Q7), multiple-relaxation-time LBM model capable of thermal DNS calculations is implemented in CUDA; the structure of the thermal half is sketched below.
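
To make the dual-lattice structure concrete, the sketch below sets up the D3Q7 thermal stencil and one collide-and-stream update for the temperature populations. For brevity it uses a single-relaxation-time (BGK) collision in NumPy rather than the paper’s multiple-relaxation-time operator in CUDA, and all lattice parameters are illustrative.

```python
# D3Q7 advection-diffusion lattice for temperature, coupled to a velocity
# field that would come from the hydrodynamic (D3Q27) lattice.
import numpy as np

# D3Q7 velocities: rest particle plus the six axis directions
c = np.array([[0,0,0],[1,0,0],[-1,0,0],[0,1,0],[0,-1,0],[0,0,1],[0,0,-1]])
w = np.array([1/4, 1/8, 1/8, 1/8, 1/8, 1/8, 1/8])   # common D3Q7 weights

N = 32                                    # cubic lattice, illustrative size
g = np.ones((7, N, N, N)) * w[:, None, None, None]  # temperature populations
u = np.zeros((3, N, N, N))                # velocity field from the flow solver
tau_g = 0.8                               # thermal relaxation time (assumed)

def thermal_step(g, u):
    T = g.sum(axis=0)                                 # temperature moment
    cu = np.einsum('qd,dxyz->qxyz', c, u)             # c_q . u at each node
    geq = w[:, None, None, None] * T * (1.0 + 4.0 * cu)  # linear equilibrium, cs^2 = 1/4
    g = g - (g - geq) / tau_g                         # BGK collision
    for q in range(7):                                # streaming along c_q
        g[q] = np.roll(g[q], shift=c[q], axis=(0, 1, 2))
    return g

g = thermal_step(g, u)
print("mean temperature:", g.sum(axis=0).mean())
```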

Findings

The model’s computational performance matches that of similar LBM solvers in earlier publications. The solver is validated against three benchmark cases for turbulent thermal flow with available data and shows excellent agreement.

Originality/value

The combination of a D3Q27 and a D3Q7 stencil for a multiple-relaxation-time LBM has, to the authors’ knowledge, not been used for simulations of thermal flows before. The code is made available in a public repository under a free license.

Details

International Journal of Numerical Methods for Heat & Fluid Flow, vol. 33 no. 5
Type: Research Article
ISSN: 0961-5539

Open Access
Article
Publication date: 7 April 2020

Sugiarto Sugiarto and Suroso Suroso

Abstract

Purpose

This study aims to develop a high-quality impairment loss allowance model in conformity with Indonesian Financial Accounting Standards 71 (PSAK 71) that makes a significant contribution to national interests and the banking industry.

Design/methodology/approach

The impairment loss allowance model is developed in seven stages, integrating statistical methods such as Markov chains, exponential smoothing, time series analysis of inherent behavioral trends in the probability of default, tail conditional expectation and Monte Carlo simulation.
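
As a hedged illustration of two of the building blocks named above, the sketch below chains a Markov model over days-past-due buckets into a 12-month probability of default and the standard expected-credit-loss product ECL = PD × LGD × EAD. The transition matrix and parameters are invented for illustration, not the authors’ calibrated model.

```python
# Markov chain over delinquency buckets -> 12-month PD -> expected credit loss.
import numpy as np

# monthly transitions between days-past-due buckets:
# current, 1-30 dpd, 31-90 dpd, default (absorbing); rows sum to 1
P = np.array([[0.95, 0.04, 0.01, 0.00],
              [0.40, 0.40, 0.15, 0.05],
              [0.10, 0.20, 0.40, 0.30],
              [0.00, 0.00, 0.00, 1.00]])

P12 = np.linalg.matrix_power(P, 12)       # 12-month transition matrix
pd_12m = P12[0, 3]                        # PD for a currently performing loan

lgd, ead = 0.45, 1_000_000.0              # assumed loss given default, exposure
ecl = pd_12m * lgd * ead
print(f"12-month PD = {pd_12m:.2%}, ECL = {ecl:,.0f}")
```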

Findings

The model developed by the authors proves to be high quality and reliable. Using the model, it can be shown that implementing the expected credit loss model under Indonesian Financial Accounting Standards 71 is more prudent than implementing the incurred loss model under Indonesian Financial Accounting Standards 55.

Research limitations/implications

The determination of default was based on days past due, and the analysis in this study did not address hedge accounting.

Practical implications

The developed model will contribute significantly to national interests as a source of reference for other banks operating in Indonesia in calculating impairment loss allowance (CKPN), and it can be used by the Financial Services Authority of Indonesia (OJK) as a guideline in assessing the impairment loss allowances formed by banks operating in Indonesia.

Originality/value

As there is not yet a standardized model for calculating impairment loss allowance under Indonesian Financial Accounting Standards 71, the model developed by the authors constitutes a new breakthrough in Indonesia.

Details

Journal of Asian Business and Economic Studies, vol. 27 no. 3
Type: Research Article
ISSN: 2515-964X

Open Access
Article
Publication date: 8 March 2023

Rianne Appel-Meulenbroek and Vitalija Danivska

Abstract

Purpose

Business case (BC) analyses are performed in many different business fields to create a report on the feasibility and competitive advantage of an intervention within an existing organisation and thereby secure commitment from management to invest. However, most BC research papers on decisions regarding internal funding are either based on anecdotal insights, based on analyses of standards from practice or focused on very specific BC calculations for a certain project, investment or field. A clear BC process method is missing.

Design/methodology/approach

This paper describes the results of a systematic literature review of 52 BC papers and reports a further conceptualisation of what a BC process should entail.

Findings

Synthesis of the findings has led to a BC definition and the composition of a 20-step BC process method. In addition, 29 relevant theories are identified to tackle the main challenges of BC analyses in future studies and make them more effective. This supports further theoretical development of academic BC research and provides a tool for BC processes in practice.

Originality/value

Although there is substantial scientific research on BCs, there has been little theoretical development and no general stepwise method for performing an optimal BC analysis.

Details

Business Process Management Journal, vol. 29 no. 8
Type: Research Article
ISSN: 1463-7154

Open Access
Article
Publication date: 19 August 2022

Marco Francesco Mazzù, Angelo Baccelloni, Simona Romani and Alberto Andria

Abstract

Purpose

This study aims to reveal the implications that trust, as a key driver of consumer behaviour, might have on consumer acceptance of front-of-pack labels (FOPLs) and policy effectiveness. By conducting three studies on 1956 European consumers with different levels of exposure to FOPLs, this study offers additional theoretical and experimental support through a deep investigation of the central role of trust in consumers’ decision-making towards healthier and more informed food choices.

Design/methodology/approach

Study 1 used structural equation modelling to assess whether trust is a relevant mediator of the relationship between attitude and behavioural intention (BI), thus upgrading the front-of-pack acceptance model (FOPAM); Study 2 tested the model by comparing two labels at the extremes of the current European scheme (NutrInform Battery [NiB], Nutri-Score [NS]); Study 3 assessed the effect when the connection between trust and algorithms is made transparent, evaluating trust dimensions with a focus on the perceived presence of an algorithm behind FOPL information.
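
As a simplified illustration of the mediation logic in Study 1 (trust mediating the attitude → behavioural intention relationship), the sketch below estimates the indirect and direct paths with ordinary least squares on simulated data. The paper itself uses full structural equation modelling; all coefficients here are hypothetical.

```python
# Regression-based mediation sketch: attitude -> trust (a), trust -> BI (b),
# attitude -> BI controlling for trust (c'); indirect effect = a * b.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 500
attitude = rng.normal(size=n)                           # exogenous predictor
trust = 0.6 * attitude + rng.normal(scale=0.8, size=n)  # assumed a-path
bi = 0.5 * trust + 0.2 * attitude + rng.normal(scale=0.8, size=n)

a = sm.OLS(trust, sm.add_constant(attitude)).fit().params[1]
fit = sm.OLS(bi, sm.add_constant(np.column_stack([attitude, trust]))).fit()
c_prime, b = fit.params[1], fit.params[2]

print(f"a = {a:.2f}, b = {b:.2f}, indirect a*b = {a * b:.2f}, direct c' = {c_prime:.2f}")
```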

Findings

Study 1 strengthens the FOPAM with the mediating role of trust in FOPLs, demonstrating a positive effect of attitude on trust and, in turn, on BI, and resulting in a higher model fit with all relationships significant; Study 2 revealed that the relative performance of the different labels on the FOPAM can be explained by the trust dimension; Study 3, investigating the dynamics of trust in the FOPAM, revealed that the NS is less effective than the NiB on attitude, BI and trust.

Research limitations/implications

The sample was limited to Italian, French and English respondents, and two labels at the extremes of the spectrum were examined. Furthermore, the research is relevant to the issue of trust; other moderators used in previous studies on the technology acceptance model, such as actual versus perceived use, user experience level, type of user and type of use, might be investigated.

Practical implications

The investigation of trust, with the upgrade of the FOPAM, enhances understanding of consumers’ decision-making processes when aided by food labels and makes a new contribution to the European Union “Inception Impact Assessment” in preparation for the finalization of the “From-Farm-to-Fork Strategy”, providing new insights into the role of trust by assessing the relative performance of FOPLs in consumers’ acceptance of food-related information. Furthermore, this study revealed that consumers’ perception of FOPLs worsens when they realize that the labels are the result of an algorithmic calculation. Finally, the new FOPAM represents a reliable theoretical model for future research on FOPLs.

Originality/value

This study increases knowledge about the performance of different FOPLs on several dimensions of food decision-making, positions the upgraded FOPAM as a valid alternative to existing theoretical models for assessing the relative performance of labels, extends the literature on algorithm-based FOPLs and can support policymakers and industry experts in their decisions towards a unified label at the European level.

Open Access
Article
Publication date: 20 February 2023

Nuh Keleş

Abstract

Purpose

This study aims to apply new modifications to the method based on the removal effects of criteria (MEREC) by changing its nonlinear logarithmic calculation steps. Geometric and harmonic means, as multiplicative functions, are used for the modifications while the effects of the criteria on overall performance are extracted one by one. Instead of the nonlinear logarithmic measure used in the MEREC method, the aim is to obtain results that are closer to the mean and have a lower standard deviation.

Design/methodology/approach

The MEREC method is based on the removal effects of the criteria on the overall performance and uses a logarithmic measure with a nonlinear function. MEREC-G, using the geometric mean, and MEREC-H, using the harmonic mean, are introduced in this study. The authors compare the MEREC method, its modifications and some other objective weight determination methods.
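
The sketch below implements the original MEREC weighting scheme on a hypothetical decision matrix (all benefit criteria) and adds a geometric-mean variant in the spirit of MEREC-G. The exact MEREC-G formula is an assumption inferred from the abstract, not taken from the paper.

```python
# MEREC: weight each criterion by how much removing it changes the
# alternatives' overall performance measure.
import numpy as np

X = np.array([[450, 8000, 54, 145],       # alternatives x criteria (hypothetical)
              [10,  9100, 2,  160],
              [100, 8200, 31, 153],
              [220, 9300, 1,  162],
              [5,   8400, 23, 158]], dtype=float)

n = X.min(axis=0) / X                     # benefit-type normalisation, values in (0, 1]

def merec_weights(n, measure):
    m = n.shape[1]
    S = measure(n, m)                                  # overall performance S_i
    E = np.array([np.abs(measure(np.delete(n, j, axis=1), m) - S).sum()
                  for j in range(m)])                  # removal effect E_j
    return E / E.sum()                                 # normalised weights

# original MEREC: nonlinear logarithmic measure
log_measure = lambda sub, m: np.log(1 + np.abs(np.log(sub)).sum(axis=1) / m)
# assumed MEREC-G form: geometric-mean-style multiplicative measure
geo_measure = lambda sub, m: np.exp(np.log(sub).sum(axis=1) / m)

print("MEREC  :", merec_weights(n, log_measure).round(3))
print("MEREC-G:", merec_weights(n, geo_measure).round(3))
```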

Findings

MEREC-G and MEREC-H, modifications of the MEREC method, are shown to be effective in determining the objective weights of the criteria. Their findings are more convenient, simpler, more reasonable, closer to the mean and show smaller deviations. The MEREC-G variant produced findings more compatible with the entropy method.

Practical implications

Decision-making can occur at any time in any area of life, with various criteria and alternatives. In multi-criteria decision-making (MCDM) models, determining the criteria weights for the selection/ranking of alternatives is a very important step. The MEREC method can be used to find more reasonable, closer-to-average results than other weight determination methods such as entropy, and it can be expected to see wider use in daily-life problems and various areas.

Originality/value

Objective weight determination methods evaluate the weights of the criteria according to the scores of the determined alternatives. In this study, the MEREC method, an objective weight determination method, is extended. Although the literature uses a nonlinear measurement model, this study contributes by using multiplicative functions. As a notable original element, the authors demonstrate the removal effect of criteria in the MEREC method through a sensitivity analysis that actually removes the alternatives one by one from the model.

Details

International Journal of Industrial Engineering and Operations Management, vol. 5 no. 3
Type: Research Article
ISSN: 2690-6090
