Search results

1 – 10 of over 1000
Open Access
Article
Publication date: 2 December 2016

Juan Aparicio

Abstract

Purpose

The purpose of this paper is to provide an outline of the major contributions in the literature on the determination of the least distance in data envelopment analysis (DEA). The focus herein is primarily on methodological developments. Specifically, attention is mainly paid to modeling aspects, computational features, the satisfaction of properties and duality. Finally, some promising avenues of future research on this topic are stated.

Design/methodology/approach

DEA is a methodology based on mathematical programming for assessing the relative efficiency of a set of decision-making units (DMUs) that use several inputs to produce several outputs. DEA is classified in the literature as a non-parametric method because it does not assume a particular functional form for the underlying production function, and it presents, in this sense, some outstanding properties: the efficiency of firms may be evaluated independently of the market prices of the inputs used and outputs produced; it may easily be used with multiple inputs and outputs; a single efficiency score is obtained for each assessed organization; the technique ranks organizations based on relative efficiency; and, finally, it yields benchmarking information. When applied to a dataset of observations and variables (inputs and outputs), DEA models provide both benchmarking information and efficiency scores for each of the evaluated units. Without a doubt, this benchmarking information gives DEA a distinct advantage over other efficiency methodologies, such as stochastic frontier analysis (SFA).

Technical inefficiency is typically measured in DEA as the distance between the observed unit and a “benchmarking” target on the estimated piecewise-linear efficient frontier. The choice of this target is critical for assessing the potential performance of each DMU in the sample, as well as for providing information on how to improve its performance. However, traditional DEA models yield targets determined by the efficient projection “furthest” from the evaluated DMU. The projected point on the efficient frontier obtained in this way may not be a representative projection for the evaluated unit, and consequently, some authors in the literature have suggested determining the closest targets instead. The general argument behind this idea is that closer targets suggest directions of improvement for the inputs and outputs of the inefficient units that may lead them to efficiency with less effort. Indeed, authors like Aparicio et al. (2007) have shown, in an application to airlines, that there can be substantial differences between the targets provided by the criterion used in traditional DEA models and those obtained when the criterion of closeness is used to determine projection points on the efficient frontier.

The determination of closest targets is connected to the calculation of the least distance from the evaluated unit to the efficient frontier of the reference technology. In fact, the former is usually computed by solving mathematical programming models that minimize some type of distance (e.g. Euclidean). In this particular respect, the main contribution in the literature is the paper by Briec (1998) on Hölder distance functions, where technical inefficiency with respect to the “weakly” efficient frontier is formally defined through mathematical distances.
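
To make the kind of mathematical program involved concrete, the sketch below solves a standard input-oriented CCR envelopment model with an off-the-shelf LP solver. It illustrates the traditional style of DEA model (not the least-distance variants surveyed in this paper), and the dataset is invented for illustration.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data: 5 DMUs (rows), 2 inputs, 1 output.
X = np.array([[2.0, 3.0], [4.0, 1.0], [3.0, 3.0], [5.0, 2.0], [4.0, 4.0]])
Y = np.array([[1.0], [1.0], [1.5], [2.0], [1.8]])

def ccr_input_efficiency(o):
    """Input-oriented CCR model for DMU o: min theta subject to
    X.T @ lam <= theta * X[o],  Y.T @ lam >= Y[o],  lam >= 0."""
    n, m = X.shape
    s = Y.shape[1]
    # Decision vector: [theta, lam_1, ..., lam_n]
    c = np.zeros(1 + n); c[0] = 1.0                    # minimize theta
    A_inputs = np.hstack([-X[o][:, None], X.T])        # X.T @ lam - theta*X[o] <= 0
    A_outputs = np.hstack([np.zeros((s, 1)), -Y.T])    # -Y.T @ lam <= -Y[o]
    A_ub = np.vstack([A_inputs, A_outputs])
    b_ub = np.concatenate([np.zeros(m), -Y[o]])
    bounds = [(0, None)] * (1 + n)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[0]

for o in range(len(X)):
    print(f"DMU {o}: efficiency = {ccr_input_efficiency(o):.3f}")
```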

Findings

The attractive features of closest targets from a benchmarking point of view have, in recent times, generated increasing interest among researchers in calculating the least distance to evaluate technical inefficiency (Aparicio et al., 2014a). In this paper, we therefore present a general classification of the published contributions, mainly from a methodological perspective, and indicate avenues for further research on this topic. The approaches we cite differ in how the idea of similarity is made operational. Similarity is, in this sense, implemented as the closeness between the values of the inputs and/or outputs of the assessed units and those of the obtained projections on the frontier of the reference production possibility set. Similarity may be measured through multiple distances and efficiency measures, and the aim is then to globally minimize the DEA model slacks in order to determine the closest efficient targets. However, as we show later in the text, minimizing a mathematical distance in DEA is not an easy task: it is equivalent to minimizing the distance to the complement of a polyhedral set, which is not a convex set. This complexity justifies the existence of different alternatives for solving these types of models.

Originality/value

To the best of our knowledge, this is the first survey on this topic.

Details

Journal of Centrum Cathedra, vol. 9 no. 2
Type: Research Article
ISSN: 1851-6599

Open Access
Article
Publication date: 21 December 2022

GyeHong Kim

Abstract

This paper presents a new methodology for evaluating the value and sensitivity of autocall knock-in-type equity-linked securities. While the existing evaluation methods, Monte Carlo simulation and the finite difference method, tend to underestimate the knock-in effect, one of the important characteristics of this product type, this paper derives a precise joint probability formula for multiple autocall chances and knock-in events. Based on this formula, calculation results obtained via numerical and Monte Carlo integration are presented and compared with those of existing models. The results of the proposed model show notable improvements in both accuracy and calculation time.
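
For context, below is a minimal sketch of the plain Monte Carlo valuation that the paper improves upon. The contract terms, the GBM dynamics and all parameters are assumed for illustration only; the paper's joint probability formula is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical contract and market parameters (illustrative only).
S0, r, sigma = 100.0, 0.02, 0.25
T, n_obs = 3.0, 6                     # 3 years, semiannual autocall dates
autocall_barriers = [0.90, 0.90, 0.85, 0.85, 0.80, 0.80]  # fraction of S0
coupon_rate = 0.04                    # accrued per observation period
ki_barrier = 0.50                     # knock-in level, fraction of S0
steps_per_year = 252

def price_autocall(n_paths=20_000):
    dt = 1.0 / steps_per_year
    n_steps = int(T * steps_per_year)
    obs_steps = [int((k + 1) * n_steps / n_obs) for k in range(n_obs)]
    payoffs = np.empty(n_paths)
    for p in range(n_paths):
        z = rng.standard_normal(n_steps)
        path = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt
                                     + sigma * np.sqrt(dt) * z))
        knocked_in = path.min() <= ki_barrier * S0
        payoff = None
        for k, step in enumerate(obs_steps):        # early-redemption check
            if path[step - 1] >= autocall_barriers[k] * S0:
                t = (k + 1) * T / n_obs
                payoff = (1.0 + coupon_rate * (k + 1)) * np.exp(-r * t)
                break
        if payoff is None:                          # held to maturity
            if not knocked_in:
                payoff = (1.0 + coupon_rate * n_obs) * np.exp(-r * T)
            else:                                   # principal loss if knocked in
                payoff = (path[-1] / S0) * np.exp(-r * T)
        payoffs[p] = payoff
    return payoffs.mean(), payoffs.std() / np.sqrt(n_paths)

value, se = price_autocall()
print(f"value = {value:.4f} (s.e. {se:.4f}) per unit notional")
```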

Details

Journal of Derivatives and Quantitative Studies: 선물연구, vol. 31 no. 1
Type: Research Article
ISSN: 1229-988X

Content available
Article
Publication date: 23 October 2023

Adam Biggs and Joseph Hamilton

Abstract

Purpose

Evaluating warfighter lethality is a critical aspect of military performance. Raw metrics such as marksmanship speed and accuracy can provide some insight, yet interpreting subtle differences can be challenging. For example, is a speed difference of 300 milliseconds more important than a 10% accuracy difference on the same drill? Marksmanship evaluations must have objective methods to differentiate between critical factors while maintaining a holistic view of human performance.

Design/methodology/approach

Monte Carlo simulations are one method to circumvent speed/accuracy trade-offs within marksmanship evaluations. They can accommodate both speed and accuracy implications simultaneously without needing to hold one constant for the sake of the other. Moreover, Monte Carlo simulations can incorporate variability as a key element of performance. This approach thus allows analysts to determine consistency of performance expectations when projecting future outcomes.
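
As an illustration of the idea (not the review's own model), the toy simulation below folds a hypothetical speed difference and accuracy difference into a single outcome, the probability of prevailing in a one-on-one engagement. All parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical shooters: (mean shot time in s, sd of shot time, hit probability).
shooter_a = (1.20, 0.15, 0.80)   # faster by 300 ms, less accurate
shooter_b = (1.50, 0.10, 0.90)   # slower, 10 % more accurate

def duel_win_prob(a, b, n_trials=50_000, max_shots=10):
    """P(A scores a hit before B) when both fire repeatedly until one hits."""
    wins = 0
    for _ in range(n_trials):
        hit_time = [np.inf, np.inf]
        for i, (mu, sd, p_hit) in enumerate((a, b)):
            t = 0.0
            for _ in range(max_shots):
                t += max(0.05, rng.normal(mu, sd))  # time of next shot
                if rng.random() < p_hit:            # first hit ends the string
                    hit_time[i] = t
                    break
        if hit_time[0] < hit_time[1]:
            wins += 1
    return wins / n_trials

print(f"P(faster shooter wins) ~ {duel_win_prob(shooter_a, shooter_b):.3f}")
```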

Findings

The review divides its outcomes into a theoretical overview and a practical implications section. Each aspect of the Monte Carlo simulation can be addressed separately, reviewed and then incorporated as a potential component of small-arms combat modeling. This presentation allows new human-performance practitioners to adopt the method more quickly for different applications.

Originality/value

Performance implications are often presented as inferential statistics. By using Monte Carlo simulations, practitioners can instead present outcomes in terms of lethality. This method should convey the impact of a marksmanship evaluation to senior leadership better than current inferential statistics, such as effect size measures.

Details

Journal of Defense Analytics and Logistics, vol. 7 no. 2
Type: Research Article
ISSN: 2399-6439

Open Access
Article
Publication date: 6 September 2022

Paul Roelofsen and Kaspar Jansen

Abstract

Purpose

The purpose of this study is to analyze the question: “What is the order of magnitude of the comfort and performance improvement achieved with the use of a cooling vest for construction workers?”

Design/methodology/approach

The use of personal cooling systems in the form of cooling vests is intended not only to reduce the heat load and prevent disruption of the body’s thermoregulation system but also to improve work performance. A calculation study was carried out on the basis of four validated mathematical models: a cooling vest model, a thermophysiological human model, a dynamic thermal sensation model and a performance loss model for construction workers.

Findings

The use of a cooling vest has a significant beneficial effect on the thermal sensation and the loss of performance, depending on the thermal load on the body.

Research limitations/implications

Each cooling vest can be characterized by its maximum cooling power (Pmax, in W/m²), its cooling capacity (AUC, in Wh/m²) and the time (tc, in minutes) after which its cooling power becomes negligible. To compare cooling vests objectively, a standard or guideline (preferably international and/or European) should be compiled for determining the cooling power and the cooling capacity of cooling vests.
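
As a small illustration of this characterization, the sketch below computes Pmax, the cooling capacity (AUC) and tc from a hypothetical measured cooling-power curve; the decay curve and the “negligible” threshold are assumptions, not values from the study.

```python
import numpy as np

# Hypothetical measured cooling power of a vest (W/m²), sampled once a minute.
t_min = np.arange(0, 181)                      # 0 .. 180 minutes
power = 160.0 * np.exp(-t_min / 45.0)          # assumed decay curve

p_max = power.max()                                       # Pmax, W/m²
auc_wh = ((power[1:] + power[:-1]) / 2).sum() / 60.0      # trapezoid rule, Wh/m²
negligible = 5.0                                          # assumed threshold, W/m²
below = np.flatnonzero(power < negligible)
t_c = int(t_min[below[0]]) if below.size else None        # tc, minutes

print(f"Pmax = {p_max:.0f} W/m², AUC = {auc_wh:.0f} Wh/m², tc = {t_c} min")
```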

Practical implications

It is recommended to implement the use of cooling vests in the construction process so that employees can use them if necessary or desired.

Social implications

Climate change, resulting in global warming, is one of the biggest problems of our time. Rising outdoor temperatures will continue through the 21st century, with heat waves of greater frequency and duration. Some regions of the world are more affected than others. Europe is one of the regions where rising global temperatures will adversely affect public health, especially that of the labor force, resulting in a decline in labor productivity. In many situations, such as construction work, air conditioning is not an option because it does not provide sufficient cooling or is a very expensive investment. In such situations, personal cooling systems, such as cooling vests, can be an efficient and financially attractive solution to the problem of discomfort and heat stress.

Originality/value

The value of the study lies in the link between four validated mathematical models, namely a cooling vest model, a thermophysiological human model, a dynamic thermal sensation model and a performance loss model for construction workers.

Details

International Journal of Clothing Science and Technology, vol. 35 no. 1
Type: Research Article
ISSN: 0955-6222

Open Access
Article
Publication date: 25 March 2024

Florian Follert and Werner Gleißner

Abstract

Purpose

From the buying club’s perspective, the transfer of a player can be interpreted as an investment from which the club expects uncertain future benefits. This paper aims to develop a decision-oriented approach for the valuation of football players that could theoretically help clubs determine the subjective value of investing in a player to assess its potential economic advantage.

Design/methodology/approach

We build on a semi-investment-theoretical risk-value model and elaborate an approach that can be applied in imperfect markets under uncertainty. Furthermore, we illustrate the valuation process with a numerical example based on fictitious data. Because it is explicitly intended for decision support, our approach differs fundamentally from a large part of the literature, which is empirically based and attempts to explain observable figures through various influencing factors.
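
In the same spirit as the paper's fictitious numerical example, the sketch below shows one simple way a risk-value calculus of this kind can be set up. The cash-flow distribution, the risk measure and the risk-aversion parameter are all assumed for illustration rather than taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical inputs: the uncertain annual net benefit (EUR m) a player
# adds over a 4-year contract, simulated as normal cash flows.
years, n_sims = 4, 100_000
mean_benefit, sd_benefit = 8.0, 3.0    # assumed EUR m per season
r_f, risk_aversion = 0.03, 0.5         # risk-free rate, lambda (assumed)

cash_flows = rng.normal(mean_benefit, sd_benefit, size=(n_sims, years))
discount = (1 + r_f) ** -np.arange(1, years + 1)
pv = cash_flows @ discount             # present value per simulation run

# Risk-value logic: decision value = E[PV] - lambda * risk measure.
# Here the risk measure is the standard deviation of PV (one common choice).
decision_value = pv.mean() - risk_aversion * pv.std()
print(f"E[PV] = {pv.mean():.1f} EUR m, sd = {pv.std():.1f} EUR m, "
      f"max payable transfer fee ~ {decision_value:.1f} EUR m")
```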

Findings

We propose a semi-investment-theoretical valuation approach that is based on a two-step model, namely, a first valuation at the club level and a final calculation to determine the decision value for an individual player. In contrast to the previous literature, we do not rely on an econometric framework that attempts to explain observable past variables but rather present a general, forward-looking decision model that can support managers in their investment decisions.

Originality/value

This approach is the first to show managers how to make an economically rational investment decision by determining the maximum payable price for a player. Nevertheless, there is no normative requirement for the decision-maker: the club will obviously have to supplement the calculus with non-financial objectives. Overall, our paper can constitute a first step toward decision-oriented player valuation and toward a theoretical comparison with actual investment decisions in football clubs, which obviously also reflect other sport-specific considerations.

Details

Management Decision, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0025-1747

Open Access
Article
Publication date: 7 December 2022

T.O.M. Forslund, I.A.S. Larsson, J.G.I. Hellström and T.S. Lundström

Abstract

Purpose

The purpose of this paper is to present a fast and bare-bones implementation of a numerical method for quickly simulating turbulent thermal flows on GPUs. The work also validates earlier research showing that the lattice Boltzmann method (LBM) is suitable for complex thermal flows.

Design/methodology/approach

A dual-lattice hydrodynamic (D3Q27) and thermal (D3Q7) multiple-relaxation-time LBM model capable of thermal DNS calculations is implemented in CUDA.
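
For readers unfamiliar with the method, the sketch below shows the basic stream-and-collide structure of an LBM solver. It is a 2D (D2Q9) single-relaxation-time toy in Python, not the paper's D3Q27/D3Q7 multiple-relaxation-time CUDA implementation; the test case (a decaying shear wave) and all parameters are assumptions.

```python
import numpy as np

# D2Q9 lattice constants: discrete velocities c and weights w.
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)
tau = 0.8                     # relaxation time; viscosity = (tau - 0.5) / 3

nx, ny = 64, 64
X, Y = np.meshgrid(np.arange(nx), np.arange(ny), indexing="ij")

def equilibrium(rho, ux, uy):
    cu = c[:, 0, None, None] * ux + c[:, 1, None, None] * uy
    return w[:, None, None] * rho * (1 + 3*cu + 4.5*cu**2
                                     - 1.5*(ux**2 + uy**2))

# Initialize with a decaying shear wave: ux(y) = A sin(2*pi*y/ny).
ux = 0.05 * np.sin(2 * np.pi * Y / ny)
f = equilibrium(np.ones((nx, ny)), ux, np.zeros((nx, ny)))

for step in range(500):
    rho = f.sum(axis=0)
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
    f -= (f - equilibrium(rho, ux, uy)) / tau        # BGK collision
    for i in range(9):                               # periodic streaming
        f[i] = np.roll(f[i], shift=tuple(c[i]), axis=(0, 1))

print("max |ux| after 500 steps:", float(np.abs(ux).max()))
```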

Findings

The model matches the computational performance of similar LBM solvers reported in earlier publications. The solver is validated against three benchmark cases for turbulent thermal flow with available reference data and shows excellent agreement.

Originality/value

The combination of D3Q27 and D3Q7 stencils for a multiple-relaxation-time LBM has, to the authors’ knowledge, not previously been used for simulations of thermal flows. The code is made available in a public repository under a free license.

Details

International Journal of Numerical Methods for Heat & Fluid Flow, vol. 33 no. 5
Type: Research Article
ISSN: 0961-5539

Open Access
Article
Publication date: 7 April 2020

Sugiarto Sugiarto and Suroso Suroso

Abstract

Purpose

This study aims to develop a high-quality impairment loss allowance model in conformity with Indonesian Financial Accounting Standards 71 (PSAK 71) that makes a significant contribution to national interests and to the banking industry.

Design/methodology/approach

The impairment loss allowance model is determined in seven stages, integrating statistical methods such as Markov chains, exponential smoothing, time-series analysis of the inherent behavioral trends in the probability of default, tail conditional expectation and Monte Carlo simulation.
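
Of the statistical components listed, the Markov chain step is the simplest to illustrate: a hypothetical annual rating-migration matrix can be raised to powers to obtain a cumulative probability-of-default term structure. All numbers below are invented, and the sketch is not the authors' seven-stage procedure.

```python
import numpy as np

# Hypothetical annual migration matrix over the states
# (Performing, Underperforming, Default); rows sum to 1, default absorbs.
P = np.array([
    [0.90, 0.08, 0.02],
    [0.30, 0.55, 0.15],
    [0.00, 0.00, 1.00],
])

def cumulative_pd(state, horizon_years):
    """Cumulative probability of default within the horizon, via P^t."""
    Pt = np.linalg.matrix_power(P, horizon_years)
    return Pt[state, -1]

for t in (1, 2, 3, 5):
    print(f"{t}-year PD from 'Performing': {cumulative_pd(0, t):.3f}")

# Simplest expected-credit-loss reading: ECL = PD * LGD * EAD
# (loss given default and exposure at default both assumed here).
lgd, ead = 0.45, 1_000_000
ecl_12m = cumulative_pd(0, 1) * lgd * ead
print(f"12-month ECL for a performing loan: {ecl_12m:,.0f}")
```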

Findings

The model developed by the authors proves to be high-quality and reliable. Using the model, it can be shown that implementing the expected credit loss model under Indonesian Financial Accounting Standards 71 is more prudent than implementing the incurred loss model under Indonesian Financial Accounting Standards 55.

Research limitations/implications

Determination of defaults was based on days past due, and the analysis in this study did not touch the aspects of hedge accounting in general.

Practical implications

This developed model will contribute significantly to national interests as a source of reference for other banks operating in Indonesia in calculating impairment loss allowance (CKPN) and can be used by the Financial Services Authority of Indonesia (OJK) as a guideline in assessing the formation of impairment loss allowance for banks operating in Indonesia.

Originality/value

As there is not yet a standardized model available for calculating the impairment loss allowance on the basis of Indonesian Financial Accounting Standards 71, the model developed by the authors represents a new breakthrough in Indonesia.

Details

Journal of Asian Business and Economic Studies, vol. 27 no. 3
Type: Research Article
ISSN: 2515-964X

Content available
Book part
Publication date: 30 July 2018

Details

Marketing Management in Turkey
Type: Book
ISBN: 978-1-78714-558-0

Open Access
Article
Publication date: 8 March 2023

Rianne Appel-Meulenbroek and Vitalija Danivska

Abstract

Purpose

Business case (BC) analyses are performed in many different business fields to report on the feasibility and competitive advantage of an intervention within an existing organisation and to secure management’s commitment to invest. However, most BC research papers on decisions regarding internal funding are based on anecdotal insights or on analyses of standards from practice, or are focused on very specific BC calculations for a certain project, investment or field. A clear BC process method is missing.

Design/methodology/approach

This paper describes the results of a systematic literature review of 52 BC papers and further conceptualises what a BC process should entail.

Findings

Synthesis of the findings has led to a BC definition and a 20-step BC process method. In addition, 29 relevant theories are identified for tackling the main challenges of BC analyses in future studies, making them more effective. This supports further theoretical development of academic BC research and provides a tool for BC processes in practice.

Originality/value

Although there is substantial scientific research on BCs, there has been little theoretical development and no general stepwise method for performing an optimal BC analysis.

Details

Business Process Management Journal, vol. 29 no. 8
Type: Research Article
ISSN: 1463-7154

Content available
Book part
Publication date: 10 October 2017

Hans Mikkelsen and Jens O. Riis

Details

Project Management
Type: Book
ISBN: 978-1-78714-830-7
