Search results
M.R. Crask and D.B. McKay
Abstract
Attention has been paid recently to the retail‐consumer link in the distribution channel. The importance of this attention, both for the retailer's revenues and the consumer's satisfaction, is obvious, but the way in which this link should be modelled is not obvious. A critical component for any such model is a measure of retail‐consumer separation or distance. In this article a measure of cognitive distance is proposed and evaluated with encouraging results.
Vassiliki A. Koutsonikola, Sophia G. Petridou, Athena I. Vakali and Georgios I. Papadimitriou
Abstract
Purpose
Web users' clustering is an important mining task, since it contributes to identifying usage patterns, a task beneficial for the wide range of applications that rely on the web. The purpose of this paper is to examine the use of Kullback‐Leibler (KL) divergence, an information-theoretic distance, as an alternative option for measuring distances in web users' clustering.
Design/methodology/approach
KL‐divergence is compared with other well‐known distance measures and clustering results are evaluated using a criterion function, validity indices, and graphical representations. Furthermore, the impact of noise (i.e. occasional or mistaken page visits) is evaluated, since it is imperative to assess whether a clustering process exhibits tolerance in noisy environments such as the web.
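As a concrete illustration of the kind of distance involved (a minimal sketch, not the authors' exact implementation), a symmetrised KL-divergence between two users' smoothed page-visit distributions can be written as follows; the smoothing constant `alpha` and the visit counts are illustrative assumptions:

```python
import math

def kl_divergence(p, q):
    """Kullback-Leibler divergence D(p || q) for discrete distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def symmetric_kl(p, q):
    """Symmetrised KL, usable as a pairwise 'distance' for clustering."""
    return kl_divergence(p, q) + kl_divergence(q, p)

def visits_to_distribution(counts, alpha=1.0):
    """Turn raw page-visit counts into a smoothed probability distribution.
    Laplace smoothing (alpha) keeps occasional zero counts, i.e. the kind
    of noise discussed above, from producing infinite divergences."""
    total = sum(counts) + alpha * len(counts)
    return [(c + alpha) / total for c in counts]

# Two users' visit counts over the same four pages (hypothetical data)
user_a = visits_to_distribution([10, 0, 5, 1])
user_b = visits_to_distribution([9, 1, 6, 0])
print(symmetric_kl(user_a, user_b))
```

Smoothing is what makes a probabilistic distance like this tolerant of occasional or mistaken page visits: a single stray visit changes the distribution only slightly.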
Findings
The proposed KL clustering approach performs comparably to other distance measures under both synthetic and real data workloads. Moreover, when extra noise is imposed on the real data, the approach shows less deterioration than most of the other conventional distance measures.
Practical implications
The experimental results show that a probabilistic measure such as KL-divergence is quite efficient in noisy environments and thus constitutes a good alternative for the web users' clustering problem.
Originality/value
This work is inspired by the use of divergence measures in clustering biological data, which the authors carry over to the area of web clustering. According to the experimental results presented in this paper, KL-divergence can be considered a good alternative for measuring distances in noisy environments such as the web.
Shumpei Haginoya, Aiko Hanayama and Tamae Koike
Abstract
Purpose
The purpose of this paper was to compare the accuracy of linking crimes using geographical proximity between three distance measures: Euclidean (distance measured by the length of a straight line between two locations), Manhattan (distance obtained by summing north-south distance and east-west distance) and the shortest route distances.
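The first two measures can be sketched directly; the coordinates below are hypothetical projected locations in kilometres. The shortest route distance additionally requires a road network and a shortest-path computation, so it is omitted here; note that the Euclidean distance is always a lower bound on it.

```python
import math

def euclidean(a, b):
    """Straight-line distance between two (x, y) locations."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def manhattan(a, b):
    """North-south distance plus east-west distance."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

# Hypothetical projected coordinates (km) of two offence locations
loc1, loc2 = (0.0, 0.0), (3.0, 4.0)
print(euclidean(loc1, loc2))  # 5.0
print(manhattan(loc1, loc2))  # 7.0
```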
Design/methodology/approach
A total of 194 cases committed by 97 serial residential burglars in Aomori Prefecture in Japan between 2004 and 2015 were used in the present study. The Mann–Whitney U test was used to compare linked (two offenses committed by the same offender) and unlinked (two offenses committed by different offenders) pairs for each distance measure. Discrimination accuracy between linked and unlinked crime pairs was evaluated using area under the receiver operating characteristic curve (AUC).
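The two statistics used here are directly related: for "shorter distance indicates a linked pair", the AUC equals the probability that a randomly chosen linked pair is closer than a randomly chosen unlinked pair, which is the Mann-Whitney U statistic divided by the product of the group sizes. A minimal sketch with hypothetical distances (not the study's data):

```python
def auc_from_distances(linked, unlinked):
    """AUC for discriminating linked from unlinked pairs by distance:
    the probability that a random linked pair is closer than a random
    unlinked pair (ties count half). Equivalent to the Mann-Whitney U
    statistic divided by len(linked) * len(unlinked)."""
    wins = 0.0
    for d_l in linked:
        for d_u in unlinked:
            if d_l < d_u:
                wins += 1.0
            elif d_l == d_u:
                wins += 0.5
    return wins / (len(linked) * len(unlinked))

# Hypothetical inter-offence distances (km): linked pairs tend to be shorter
linked = [1.2, 0.8, 2.5, 1.9]
unlinked = [4.1, 3.3, 6.7, 2.0]
print(auc_from_distances(linked, unlinked))  # 0.9375
```

An AUC of 0.5 means no discrimination and 1.0 means perfect separation, which is why near-ceiling values (as reported below) leave little room to distinguish the measures.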
Findings
The Mann–Whitney U test showed that the distances of the linked pairs were significantly shorter than those of the unlinked pairs for all distance measures. Comparison of the AUCs showed that the shortest route distance achieved significantly higher accuracy compared with the Euclidean distance, whereas there was no significant difference between the Euclidean and the Manhattan distance or between the Manhattan and the shortest route distance. These findings give partial support to the idea that distance measures taking the impact of environmental factors into consideration might be able to identify a crime series more accurately than Euclidean distances.
Research limitations/implications
Although the results suggested a difference between the Euclidean and the shortest route distance, the difference was small, and all distance measures yielded outstanding AUC values, probably because of ceiling effects. Further investigation making the same comparison in a narrower area is needed to avoid this potential inflation of discrimination accuracy.
Practical implications
The shortest route distance might contribute to improving the accuracy of crime linkage based on geographical proximity. However, further investigation is needed before the shortest route distance can be recommended for use in practice. Given that the area targeted in the present study was relatively large, the findings may contribute especially to improving the accuracy of proactive comparative case analysis, which estimates the overall distribution of serial crimes in a region, by enabling the selection of a more effective distance measure.
Social implications
Improving the accuracy of crime linkage may assist criminal investigations and contribute to the earlier arrest of offenders.
Originality/value
The results of the present study provide an initial indication of the efficacy of using distance measures taking environmental factors into account.
Vasco Sanchez Rodrigues, John Cowburn, Andrew Potter, Mohamed Naim and Anthony Whiteing
Abstract
Purpose
The purpose of this paper is to develop a measure that links the causes and consequences of disruptions in freight transport operations. Such a measure is needed to quantify the scale of impact and identify the root causes of disruptions.
Design/methodology/approach
In order to develop this measure, an inductive approach was adopted, using four primary case studies to test the measure in an industrial environment. The case studies are from the fast moving consumer goods sector with primary and secondary distribution networks included. The “Extra Distance” measure has been evaluated against established generic criteria that define the quality of any performance measure.
Findings
The research indicates good compliance with the criteria used to evaluate the “Extra Distance” measure. The measure is also found to be useful for practitioners who are able to directly relate the measure to their distribution network operations.
Research limitations/implications
Further research should see the “Extra Distance” measure further tested in other freight transport operations and industrial sectors.
Practical implications
The measure is directly related to a number of causes of uncertainty, which helps freight transport managers to quickly identify potential solutions. The “Extra Distance” measure can be used to quantify the effects of disruptions in road freight transport networks, which generate unnecessary cost within distribution networks and potentially erode profit margins that are known to be very low in the road freight transport industry.
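The paper's formal definition of the measure is not reproduced here, but a plausible minimal reading, recording the kilometres driven beyond plan and attributing them to a disruption cause, can be sketched as follows; the field names and cause categories are illustrative assumptions, not the authors' definitions:

```python
from dataclasses import dataclass

@dataclass
class Trip:
    planned_km: float
    actual_km: float
    disruption_cause: str  # e.g. "congestion", "road closure", "none"

def extra_distance(trips):
    """Total extra kilometres driven beyond plan, grouped by cause,
    linking each consequence (extra distance) to its cause."""
    totals = {}
    for t in trips:
        extra = max(0.0, t.actual_km - t.planned_km)
        totals[t.disruption_cause] = totals.get(t.disruption_cause, 0.0) + extra
    return totals

# Hypothetical distribution-network trips
trips = [
    Trip(120.0, 135.0, "congestion"),
    Trip(80.0, 80.0, "none"),
    Trip(200.0, 228.0, "road closure"),
]
print(extra_distance(trips))
```

Grouping by cause is what lets a manager read the measure both as a scale of impact (total extra km) and as a pointer to root causes.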
Originality/value
This paper presents a novel approach to the assessment of the impact caused by uncertainty within freight transport operations.
Abstract
Purpose
The purpose of this paper is to provide an outline of the major contributions in the literature on the determination of the least distance in data envelopment analysis (DEA). The focus herein is primarily on methodological developments. Specifically, attention is mainly paid to modeling aspects, computational features, the satisfaction of properties and duality. Finally, some promising avenues of future research on this topic are stated.
Design/methodology/approach
DEA is a methodology based on mathematical programming for assessing the relative efficiency of a set of decision-making units (DMUs) that use several inputs to produce several outputs. DEA is classified in the literature as a non-parametric method because it does not assume a particular functional form for the underlying production function and, in this sense, presents some outstanding properties: the efficiency of firms may be evaluated independently of the market prices of the inputs used and outputs produced; it may be easily used with multiple inputs and outputs; a single efficiency score is obtained for each assessed organization; the technique ranks organizations on relative efficiency; and, finally, it yields benchmarking information. When applied to a dataset of observations and variables (inputs and outputs), DEA models provide both benchmarking information and an efficiency score for each evaluated unit. Without a doubt, this benchmarking information gives DEA a distinct advantage over other efficiency methodologies, such as stochastic frontier analysis (SFA).

Technical inefficiency is typically measured in DEA as the distance between the observed unit and a “benchmarking” target on the estimated piece-wise linear efficient frontier. The choice of this target is critical for assessing the potential performance of each DMU in the sample, as well as for providing information on how to increase its performance. However, traditional DEA models yield targets determined by the “furthest” efficient projection from the evaluated DMU. The point projected onto the efficient frontier in this way may not be a representative projection for the judged unit, and consequently, some authors in the literature have suggested determining closest targets instead.
The general argument behind this idea is that closer targets suggest directions of enhancement for the inputs and outputs of the inefficient units that may lead them to the efficiency with less effort. Indeed, authors like Aparicio et al. (2007) have shown, in an application on airlines, that it is possible to find substantial differences between the targets provided by applying the criterion used by the traditional DEA models, and those obtained when the criterion of closeness is utilized for determining projection points on the efficient frontier. The determination of closest targets is connected to the calculation of the least distance from the evaluated unit to the efficient frontier of the reference technology. In fact, the former is usually computed through solving mathematical programming models associated with minimizing some type of distance (e.g. Euclidean). In this particular respect, the main contribution in the literature is the paper by Briec (1998) on Hölder distance functions, where formally technical inefficiency to the “weakly” efficient frontier is defined through mathematical distances.
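To make the contrast concrete, the following sketch solves the traditional input-oriented radial (CCR) envelopment model, i.e. the furthest-projection model that least-distance approaches are contrasted with, using `scipy.optimize.linprog`; the two-DMU dataset is illustrative:

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, k):
    """Radial input efficiency of DMU k under the traditional CCR model:
        min theta  s.t.  X^T lam <= theta * x_k,  Y^T lam >= y_k,  lam >= 0.
    X: (n_dmus, n_inputs) input matrix; Y: (n_dmus, n_outputs) output matrix.
    The optimal theta projects DMU k radially onto the efficient frontier,
    which need not be its closest efficient target."""
    n, m = X.shape
    _, s = Y.shape
    # Decision vector: [theta, lam_1, ..., lam_n]
    c = np.zeros(n + 1)
    c[0] = 1.0  # minimise theta
    # Input constraints: sum_j lam_j x_ij - theta * x_ik <= 0
    A_in = np.hstack([-X[k].reshape(m, 1), X.T])
    b_in = np.zeros(m)
    # Output constraints: -sum_j lam_j y_rj <= -y_rk
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])
    b_out = -Y[k]
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([b_in, b_out]),
                  bounds=[(0, None)] * (n + 1))
    return res.x[0]

# Two DMUs, one input, one output: DMU 0 is efficient, DMU 1 is not.
X = np.array([[2.0], [4.0]])
Y = np.array([[2.0], [2.0]])
print(ccr_efficiency(X, Y, 0))  # ~1.0
print(ccr_efficiency(X, Y, 1))  # ~0.5
```

Least-distance models replace this radial objective with the minimisation of some distance (or of total slacks) to the frontier, which, as noted above, is harder because the complement of the production possibility set is not convex.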
Findings
All the interesting features of the determination of closest targets from a benchmarking point of view have generated, in recent times, the increasing interest of researchers in the calculation of the least distance to evaluate technical inefficiency (Aparicio et al., 2014a). So, in this paper, we present a general classification of published contributions, mainly from a methodological perspective, and additionally, we indicate avenues for further research on this topic. The approaches that we cite in this paper differ in the way that the idea of similarity is made operative. Similarity is, in this sense, implemented as the closeness between the values of the inputs and/or outputs of the assessed units and those of the obtained projections on the frontier of the reference production possibility set. Similarity may be measured through multiple distances and efficiency measures. In turn, the aim is to globally minimize DEA model slacks to determine the closest efficient targets. However, as we will show later in the text, minimizing a mathematical distance in DEA is not an easy task, as it is equivalent to minimizing the distance to the complement of a polyhedral set, which is not a convex set. This complexity will justify the existence of different alternatives for solving these types of models.
Originality/value
To the best of our knowledge, this is the first survey on this topic.
Zhibin Zhou, Jongwook Kwon, Bo Zhang, Junjian Li, Hak cho Kim and Ji Hyun Heo
Abstract
Purpose
During the past several decades, national distance (ND) has increasingly become a vital cornerstone of international business (IB) research, as both explicit and implicit distance are among the essential drivers of IB activities. However, the methods used to measure ND in the existing literature are varied and inconsistent; therefore, this paper aims to suggest legitimate uses of ND in the IB field and the most suitable ND dimensions for various situations.
Design/methodology/approach
This paper provides a historical overview of the theoretical background and conceptual development of ND, based on the past four decades of studies in 17 leading IB journals identified using Google Scholar. The authors also examine multiform ND measurement methods and their details through qualitative and quantitative analysis based on the data collected in previous studies.
Findings
This research summarizes the common measurement methods and elements of the different types of ND and proposes solutions based on a multifaceted analysis.
Originality/value
The micro-analysis examines each type of ND in terms of the proportion of variables, issues, measurement methods and representative proxies, going beyond previous studies. This research also seeks to provide clarity and suggest solutions to these problems through a combined macro- and micro-analysis.
André van Hoorn and Robbert Maseland
Abstract
Purpose
The purpose of this chapter is to make sense of the cultural distance paradox through a basic assessment of the cross-cultural comparability of cultural distance measures. Cultural distance between a base country and partner countries is a key construct in international business (IB). However, we propose that what exactly is measured by cultural distance is unique for each country that is chosen as the base country to/from which cultural distance to a set of partner countries is calculated.
Methodology/approach
We use a mathematical argument to establish that cultural distance may correlate rather differently with the culture of partner countries depending on which base country one considers, for example, the United States or China. We then use empirical analysis to show the relevance of this argument, using Hofstede’s data on national culture for 69 countries.
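The mathematical argument can be illustrated with a toy example (synthetic two-dimension scores, not Hofstede's actual data): the same Euclidean cultural distance correlates positively with partner-country individualism when calculated from a low-individualism base country, and negatively from a high-individualism base.

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def distance(base, partner):
    """Euclidean cultural distance across dimensions."""
    return math.sqrt(sum((b - p) ** 2 for b, p in zip(base, partner)))

# Hypothetical (individualism, power distance) scores for five partner
# countries and two base countries at opposite ends of individualism.
partners = [(20, 70), (40, 55), (60, 40), (80, 30), (95, 20)]
base_low, base_high = (10, 80), (90, 25)

individualism = [p[0] for p in partners]
d_low = [distance(base_low, p) for p in partners]
d_high = [distance(base_high, p) for p in partners]

print(pearson(individualism, d_low))   # strongly positive
print(pearson(individualism, d_high))  # strongly negative
```

The sign flip is the point: what "cultural distance" measures about partner countries depends on the base country it is calculated from.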
Findings
Results show that cultural distance indeed has very different correlations with partner country culture, depending on which country one selects as the base country in one’s distance calculations.
Practical implications
The implication of our findings is that measured cultural distance is not equivalent across different base countries. The effect of cultural distance on issues such as foreign market entry mode or market selection therefore lacks international generalizability.
Originality/value
This chapter presents the first assessment of the cross-cultural comparability of cultural distance. The paradoxical findings that plague extant cultural distance research may be understood in light of this lack of measurement equivalence.
Mike Szymanski, Ivan Valdovinos and Evodio Kaltenecker
Abstract
Purpose
This study aims to examine the relationship between cultural distances between countries and their scores in the Corruption Perception Index (CPI), which is the most commonly used measure of corruption in international business (IB) research.
Design/methodology/approach
The authors applied fixed-effect (generalized least squares) statistical modeling technique to analyze 1,580 year-country observations.
Findings
The authors found that the CPI score is determined to a large extent by cultural distances between countries, specifically the distance to the USA and to Denmark.
Research limitations/implications
CPI is often used as a sole measure of state-level corruption in IB research. The results show that the measure is significantly influenced by cultural differences and hence it should be applied with great caution, preferably augmented with other measures.
Originality/value
To the best of the authors’ knowledge, this is the first study to look at cultural distances as determinants of CPI score. The authors empirically test whether the CPI is culturally biased.